
Robotic cubes shapeshift in outer space

MIT PhD student Martin Nisser tests self-reconfiguring robot blocks, or ElectroVoxels, in microgravity. Photo: Steve Boxall/ZeroG

By Rachel Gordon | MIT CSAIL

If faced with the choice of sending a swarm of full-sized, distinct robots to space, or a large crew of smaller robotic modules, you might want to enlist the latter. Modular robots, like those depicted in films such as “Big Hero 6,” hold a special type of promise for their self-assembling and reconfiguring abilities. But for all of the ambitious desire for fast, reliable deployment in domains extending to space exploration, search and rescue, and shape-shifting, modular robots built to date are still a little clunky. They’re typically built from a menagerie of large, expensive motors to facilitate movement, calling for a much-needed focus on more scalable architectures — both up in quantity and down in size.

Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) called on electromagnetism — electromagnetic fields generated by the movement of electric current — to avoid the usual stuffing of bulky and expensive actuators into individual blocks. Instead, they embedded small, easily manufactured, inexpensive electromagnets into the edges of the cubes that repel and attract, allowing the robots to spin and move around each other and rapidly change shape.

The “ElectroVoxels” have a side length of about 60 millimeters, and the magnets consist of a ferrite core (they look like little black tubes) wrapped with copper wire, totaling a whopping cost of just 60 cents. Inside each cube are tiny printed circuit boards and electronics that send current through the right electromagnet in the right direction.

Unlike traditional hinges that require mechanical attachments between two elements, ElectroVoxels are completely wireless, making them much easier to maintain and manufacture at scale.

ElectroVoxels are robotic cubes that can reconfigure using electromagnets. The cubes don’t need motors or propellant to move, and can operate in microgravity.

To better visualize what a bunch of blocks would look like while interacting, the scientists used a software planner that visualizes reconfigurations and computes the underlying electromagnetic assignments. A user can manipulate up to a thousand cubes with just a few clicks, or use predefined scripts that encode multiple, consecutive rotations. The system really lets the user drive the fate of the blocks, within reason — you can change the speed, highlight the magnets, and display the necessary moves to avoid collisions. You can also instruct the blocks to take on different shapes (like morphing a chair into a couch, because who needs both?).

The cheap little blocks are particularly auspicious for microgravity environments, where any structure you want to launch to orbit needs to fit inside the rocket used to launch it. After initial tests on an air table, ElectroVoxels found true weightlessness when tested on a microgravity flight, with the overall aim of enabling better space exploration tools, such as propellant-free reconfiguration or changing the inertia properties of a spacecraft.

By leveraging propellant-free actuation, for example, there’s no need to launch extra fuel for reconfiguration, which addresses many of the challenges associated with launch mass and volume. The hope, then, is that this reconfigurability method could aid myriad future space endeavors: augmentation and replacement of space structures over multiple launches, temporary structures to help with spacecraft inspection and astronaut assistance, and, in future iterations, cubes that act as self-sorting storage containers.

“ElectroVoxels show how to engineer a fully reconfigurable system, and exposes our scientific community to the challenges that need to be tackled to have a fully functional modular robotic system in orbit,” says Dario Izzo, head of the Advanced Concepts Team at the European Space Agency. “This research demonstrates how electromagnetically actuated pivoting cubes are simple to build, operate, and maintain, enabling a flexible, modular and reconfigurable system that can serve as an inspiration to design intelligent components of future exploration missions.”

To make the blocks move, they have to follow a sequence, like little homogeneous Tetris pieces. In this case, the polarization sequence has three phases: launch, travel, and catch. Each phase involves a traveling cube (the one that moves), an origin cube (from which the traveling cube launches), and a destination cube (which catches the traveling cube). Users of the software can specify which cube to pivot in what direction, and the algorithm will automatically compute the sequence and addresses of electromagnet assignments (repel, attract, or off) required to make that happen.
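To make that sequence concrete, here is a minimal Python sketch of one pivot broken into launch, travel, and catch phases. The cube names, edge labels, and which magnets fire in each phase are hypothetical placeholders, not the planner's actual output; only the three-phase structure and the repel/attract/off command set come from the description above.

```python
from dataclasses import dataclass
from typing import Dict, List

# Electromagnet commands described in the article: repel, attract, or turn off.
REPEL, ATTRACT, OFF = "repel", "attract", "off"

@dataclass
class PivotStep:
    phase: str                 # "launch", "travel", or "catch"
    commands: Dict[str, str]   # edge-magnet name -> command

def pivot_sequence(traveling: str, origin: str, destination: str) -> List[PivotStep]:
    """Illustrative three-phase polarization sequence for one pivot.

    Which edge magnets fire in each phase is hypothetical; the real planner
    computes the assignments from the cubes' geometry.
    """
    return [
        # Launch: the origin cube pushes the traveling cube away to start the rotation.
        PivotStep("launch", {
            f"{origin}.edge_near_{traveling}": REPEL,
            f"{traveling}.edge_near_{origin}": REPEL,
        }),
        # Travel: magnets along the shared hinge edge attract so the cube rotates
        # about that edge instead of drifting off.
        PivotStep("travel", {
            f"{traveling}.hinge_edge": ATTRACT,
            f"{origin}.hinge_edge": ATTRACT,
        }),
        # Catch: the destination cube attracts the incoming cube, then the hinge turns off.
        PivotStep("catch", {
            f"{destination}.edge_near_{traveling}": ATTRACT,
            f"{traveling}.edge_near_{destination}": ATTRACT,
            f"{origin}.hinge_edge": OFF,
        }),
    ]

if __name__ == "__main__":
    for step in pivot_sequence("cube_A", "cube_B", "cube_C"):
        print(step.phase, step.commands)
```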

For future work, moving from space to Earth is the natural next step for ElectroVoxels; reconfiguring against gravity here will require more detailed modeling and optimization of the electromagnets.

“When building a large, complex structure, you don’t want to be constrained by the availability and expertise of people assembling it, the size of your transportation vehicle, or the adverse environmental conditions of the assembly site. While these axioms hold true on Earth, they compound severely for building things in space,” says MIT CSAIL PhD student Martin Nisser, the lead author on a paper about ElectroVoxels. “If you could have structures that assemble themselves from simple, homogeneous modules, you could eliminate a lot of these problems. So while the potential benefits in space are particularly great, the paradox is that the favorable dynamics provided by microgravity mean some of those problems are actually also easier to solve — in space, even tiny forces can make big things move. By applying this technology to solve real near-term problems in space, we can hopefully incubate the technology for future use on earth too.”

Nisser wrote the paper alongside Leon Cheng and Yashaswini Makaram of MIT CSAIL; Ryo Suzuki, assistant professor of computer science at the University of Calgary; and MIT Professor Stefanie Mueller. They will present the work at the 2022 International Conference on Robotics and Automation. The work was supported, in part, by The MIT Space Exploration Initiative.

Q&A: Cathy Wu on developing algorithms to safely integrate robots into our world

Cathy Wu is the Gilbert W. Winslow Assistant Professor of Civil and Environmental Engineering and a member of the MIT Institute for Data, Systems, and Society.

By Kim Martineau | MIT Schwarzman College of Computing

Cathy Wu is the Gilbert W. Winslow Assistant Professor of Civil and Environmental Engineering and a member of the MIT Institute for Data, Systems, and Society. As an undergraduate, Wu won MIT’s toughest robotics competition, and as a graduate student took the University of California at Berkeley’s first-ever course on deep reinforcement learning. Now back at MIT, she’s working to improve the flow of robots in Amazon warehouses under the Science Hub, a new collaboration between the tech giant and the MIT Schwarzman College of Computing. Outside of the lab and classroom, Wu can be found running, drawing, pouring lattes at home, and watching YouTube videos on math and infrastructure via 3Blue1Brown and Practical Engineering. She recently took a break from all of that to talk about her work.

Q: What put you on the path to robotics and self-driving cars?

A: My parents always wanted a doctor in the family. However, I’m bad at following instructions and became the wrong kind of doctor! Inspired by my physics and computer science classes in high school, I decided to study engineering. I wanted to help as many people as a medical doctor could.

At MIT, I looked for applications in energy, education, and agriculture, but the self-driving car was the first to grab me. It has yet to let go! Ninety-four percent of serious car crashes are caused by human error and could potentially be prevented by self-driving cars. Autonomous vehicles could also ease traffic congestion, save energy, and improve mobility.

I first learned about self-driving cars from Seth Teller during his guest lecture for the course Mobile Autonomous Systems Lab (MASLAB), in which MIT undergraduates compete to build the best full-functioning robot from scratch. Our ball-fetching bot, Putzputz, won first place. From there, I took more classes in machine learning, computer vision, and transportation, and joined Teller’s lab. I also competed in several mobility-related hackathons, including one sponsored by Hubway, now known as Blue Bike.

Q: You’ve explored ways to help humans and autonomous vehicles interact more smoothly. What makes this problem so hard?

A: Both systems are highly complex, and our classical modeling tools are woefully insufficient. Integrating autonomous vehicles into our existing mobility systems is a huge undertaking. For example, we don’t know whether autonomous vehicles will cut energy use by 40 percent, or double it. We need more powerful tools to cut through the uncertainty. My PhD thesis at Berkeley tried to do this. I developed scalable optimization methods in the areas of robot control, state estimation, and system design. These methods could help decision-makers anticipate future scenarios and design better systems to accommodate both humans and robots.

Q: How is deep reinforcement learning, which combines deep learning and reinforcement learning, changing robotics?

A: I took John Schulman and Pieter Abbeel’s reinforcement learning class at Berkeley in 2015 shortly after DeepMind published their breakthrough paper in Nature. They had trained an agent via deep learning and reinforcement learning to play “Space Invaders” and a suite of Atari games at superhuman levels. That created quite some buzz. A year later, I started to incorporate reinforcement learning into problems involving mixed traffic systems, in which only some cars are automated. I realized that classical control techniques couldn’t handle the complex nonlinear control problems I was formulating.

Deep RL is now mainstream but it’s by no means pervasive in robotics, which still relies heavily on classical model-based control and planning methods. Deep learning continues to be important for processing raw sensor data like camera images and radio waves, and reinforcement learning is gradually being incorporated. I see traffic systems as gigantic multi-robot systems. I’m excited for an upcoming collaboration with Utah’s Department of Transportation to apply reinforcement learning to coordinate cars with traffic signals, reducing congestion and thus carbon emissions.

Q: You’ve talked about the MIT course 6.003 (Signals and Systems) and its impact on you. What about it spoke to you?

A: The mindset. That problems that look messy can be analyzed with common, and sometimes simple, tools. Signals are transformed by systems in various ways, but what do these abstract terms mean, anyway? A mechanical system can take a signal like gears turning at some speed and transform it into a lever turning at another speed. A digital system can take binary digits and turn them into other binary digits or a string of letters or an image. Financial systems can take news and transform it via millions of trading decisions into stock prices. People take in signals every day through advertisements, job offers, gossip, and so on, and translate them into actions that in turn influence society and other people. This humble class on signals and systems linked mechanical, digital, and societal systems and showed me how foundational tools can cut through the noise.

Q: In your project with Amazon you’re training warehouse robots to pick up, sort, and deliver goods. What are the technical challenges?

A: This project involves assigning robots to a given task and routing them there. [Professor] Cynthia Barnhart’s team is focused on task assignment, and mine, on path planning. Both problems are considered combinatorial optimization problems because the solution involves a combination of choices. As the number of tasks and robots increases, the number of possible solutions grows exponentially. It’s called the curse of dimensionality. Both problems are what we call NP-hard; there may not be an efficient algorithm to solve them. Our goal is to devise a shortcut.

Routing a single robot for a single task isn’t difficult. It’s like using Google Maps to find the shortest path home. It can be solved efficiently with several algorithms, including Dijkstra’s. But warehouses resemble small cities with hundreds of robots. When traffic jams occur, customers can’t get their packages as quickly. Our goal is to develop algorithms that find the most efficient paths for all of the robots.
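For reference, single-robot routing of the kind described above can be handled with a textbook implementation of Dijkstra's algorithm. The sketch below runs on a toy 4-connected grid with unit move costs and "#" cells standing in for shelving; the grid, costs, and coordinates are illustrative assumptions, not Amazon's actual warehouse model.

```python
import heapq

def dijkstra_grid(grid, start, goal):
    """Shortest path on a 4-connected grid; '#' cells are blocked shelves."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue                      # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#":
                nd = d + 1                # unit cost per move
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Reconstruct the path by walking the predecessor map backwards.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

warehouse = ["....#....",
             ".##.#.##.",
             ".........",
             ".##...##."]
print(dijkstra_grid(warehouse, (0, 0), (3, 8)))
```

The hard part, as the interview notes, is not this single-agent case but coordinating hundreds of such paths so the robots do not jam each other.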

Q: Are there other applications?

A: Yes. The algorithms we test in Amazon warehouses might one day help to ease congestion in real cities. Other potential applications include controlling planes on runways, swarms of drones in the air, and even characters in video games. These algorithms could also be used for other robotic planning tasks like scheduling and routing.

Q: AI is evolving rapidly. Where do you hope to see the big breakthroughs coming?

A: I’d like to see deep learning and deep RL used to solve societal problems involving mobility, infrastructure, social media, health care, and education. Deep RL now has a toehold in robotics and industrial applications like chip design, but we still need to be careful in applying it to systems with humans in the loop. Ultimately, we want to design systems for people. Currently, we simply don’t have the right tools.

Q: What worries you most about AI taking on more and more specialized tasks?

A: AI has the potential for tremendous good, but it could also help to accelerate the widening gap between the haves and the have-nots. Our political and regulatory systems could help to integrate AI into society and minimize job losses and income inequality, but I worry that they’re not equipped yet to handle the firehose of AI.

Q: What’s the last great book you read?

A: “How to Avoid a Climate Disaster,” by Bill Gates. I absolutely loved the way that Gates was able to take an overwhelmingly complex topic and distill it down into words that everyone can understand. His optimism inspires me to keep pushing on applications of AI and robotics to help avoid a climate disaster.

Giving bug-like bots a boost

MIT researchers have pioneered a new fabrication technique that enables them to produce low-voltage, power-dense, high endurance soft actuators for an aerial microrobot. Credits: Courtesy of the researchers

By Adam Zewe | MIT News Office

When it comes to robots, bigger isn’t always better. Someday, a swarm of insect-sized robots might pollinate a field of crops or search for survivors amid the rubble of a collapsed building.

MIT researchers have demonstrated diminutive drones that can zip around with bug-like agility and resilience, and that could eventually perform these tasks. The soft actuators that propel these microrobots are very durable, but they require much higher voltages than similarly sized rigid actuators. The featherweight robots can’t carry the necessary power electronics that would allow them to fly on their own.

Now, these researchers have pioneered a fabrication technique that enables them to build soft actuators that operate with 75 percent lower voltage than current versions while carrying 80 percent more payload. These soft actuators are like artificial muscles that rapidly flap the robot’s wings.

The artificial muscles vastly improve the robot’s payload and allow it to achieve best-in-class hovering performance. Image: Kevin Chen

This new fabrication technique produces artificial muscles with fewer defects, which dramatically extends the lifespan of the components and increases the robot’s performance and payload.   

“This opens up a lot of opportunity in the future for us to transition to putting power electronics on the microrobot. People tend to think that soft robots are not as capable as rigid robots. We demonstrate that this robot, weighing less than a gram, flies for the longest time with the smallest error during a hovering flight. The take-home message is that soft robots can exceed the performance of rigid robots,” says Kevin Chen, who is the D. Reid Weedon, Jr. ’41 assistant professor in the Department of Electrical Engineering and Computer Science, the head of the Soft and Micro Robotics Laboratory in the Research Laboratory of Electronics (RLE), and the senior author of the paper.

Chen’s coauthors include Zhijian Ren and Suhan Kim, co-lead authors and EECS graduate students; Xiang Ji, a research scientist in EECS; Weikun Zhu, a chemical engineering graduate student; Farnaz Niroui, an assistant professor in EECS; and Jing Kong, a professor in EECS and principal investigator in RLE. The research has been accepted for publication in Advanced Materials and is included in the journal’s Rising Stars series, which recognizes outstanding works from early-career researchers.

Making muscles

The rectangular microrobot, which weighs less than one-fourth of a penny, has four sets of wings that are each driven by a soft actuator. These muscle-like actuators are made from layers of elastomer that are sandwiched between two very thin electrodes and then rolled into a squishy cylinder. When voltage is applied to the actuator, the electrodes squeeze the elastomer, and that mechanical strain is used to flap the wing.

The rectangular microrobot, which weighs less than one-fourth of a penny, has four sets of wings that are each driven by a soft actuator. Credits: Courtesy of the researchers

The more surface area the actuator has, the less voltage is required. So, Chen and his team build these artificial muscles by alternating between as many ultrathin layers of elastomer and electrode as they can. However, as the elastomer layers get thinner, they become more unstable.

For the first time, the researchers were able to create an actuator with 20 layers, each of which is 10 micrometers in thickness (about the diameter of a red blood cell). But they had to reinvent parts of the fabrication process to get there.
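As a back-of-envelope illustration of why thinner layers matter, dielectric elastomer actuators are often modeled with the Maxwell-stress relation p ≈ ε0·εr·(V/t)², which makes the required voltage scale linearly with layer thickness at a fixed actuation pressure. The permittivity and target pressure below are assumed values for illustration only; this is not the model or the numbers from the paper.

```python
from math import sqrt

EPS0 = 8.854e-12          # vacuum permittivity, F/m
EPS_R = 3.0               # assumed relative permittivity of the elastomer
TARGET_PRESSURE = 1.0e5   # assumed actuation (Maxwell) pressure, Pa

def required_voltage(layer_thickness_m: float) -> float:
    """Voltage to reach TARGET_PRESSURE under p ~= eps0 * eps_r * (V / t)**2."""
    return layer_thickness_m * sqrt(TARGET_PRESSURE / (EPS0 * EPS_R))

# Halving the layer thickness halves the drive voltage; only this scaling,
# not the absolute numbers, is the point of the sketch.
for t_um in (30, 20, 10):
    print(f"{t_um:2d} um layer: ~{required_voltage(t_um * 1e-6):,.0f} V")
```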

One major roadblock came from the spin coating process. During spin coating, an elastomer is poured onto a flat surface and rapidly rotated, and the centrifugal force pulls the film outward to make it thinner.

“In this process, air comes back into the elastomer and creates a lot of microscopic air bubbles. The diameter of these air bubbles is barely 1 micrometer, so previously we just sort of ignored them. But when you get thinner and thinner layers, the effect of the air bubbles becomes stronger and stronger. That is traditionally why people haven’t been able to make these very thin layers,” Chen explains.

He and his collaborators found that performing a vacuuming process immediately after spin coating, while the elastomer is still wet, removes the air bubbles. Then, they bake the elastomer to dry it.

Removing these defects increases the power output of the actuator by more than 300 percent and significantly improves its lifespan, Chen says.

The researchers also optimized the thin electrodes, which are composed of carbon nanotubes, super-strong rolls of carbon that are about 1/50,000 the diameter of a human hair. Higher concentrations of carbon nanotubes increase the actuator’s power output and reduce voltage, but dense layers also contain more defects.

For instance, the carbon nanotubes have sharp ends and can pierce the elastomer, which causes the device to short out, Chen explains. After much trial and error, the researchers found the optimal concentration.

Another problem comes from the curing stage — as more layers are added, the actuator takes longer and longer to dry.

“The first time I asked my student to make a multilayer actuator, once he got to 12 layers, he had to wait two days for it to cure. That is totally not sustainable, especially if you want to scale up to more layers,” Chen says.

They found that baking each layer for a few minutes immediately after the carbon nanotubes are transferred to the elastomer cuts down the curing time as more layers are added.

Best-in-class performance

After using this technique to create a 20-layer artificial muscle, they tested it against their previous six-layer version and state-of-the-art, rigid actuators.

During liftoff experiments, the 20-layer actuator, which requires less than 500 volts to operate, exerted enough power to give the robot a lift-to-weight ratio of 3.7 to 1, so it could carry items that are nearly three times its weight.

“We demonstrate that this robot, weighing less than a gram, flies for the longest time with the smallest error during a hovering flight,” says Kevin Chen. Credits: Courtesy of the researchers

They also demonstrated a 20-second hovering flight, which Chen says is the longest ever recorded by a sub-gram robot. Their hovering robot held its position more stably than any of the others. The 20-layer actuator was still working smoothly after being driven for more than 2 million cycles, far outpacing the lifespan of other actuators.

“Two years ago, we created the most power-dense actuator and it could barely fly. We started to wonder, can soft robots ever compete with rigid robots? We observed one defect after another, so we kept working and we solved one fabrication problem after another, and now the soft actuator’s performance is catching up. They are even a little bit better than the state-of-the-art rigid ones. And there are still a number of fabrication processes in material science that we don’t understand. So, I am very excited to continue to reduce actuation voltage,” he says.

Chen looks forward to collaborating with Niroui to build actuators in a clean room at MIT.nano and leverage nanofabrication techniques. For now, his team is limited in how thin they can make the layers by dust in the air and the maximum spin coating speed. Working in a clean room eliminates this problem and would allow them to use methods, such as doctor blading, that are more precise than spin coating.

While Chen is thrilled about producing 10-micrometer actuator layers, his hope is to reduce the thickness to only 1 micrometer, which would open the door to many applications for these insect-sized robots.

This work is supported, in part, by the MIT Research Laboratory of Electronics and a Mathworks Graduate Fellowship.

Meet the Oystamaran

MIT students and researchers from MIT Sea Grant work with local oyster farmers in advancing the aquaculture industry by seeking solutions to some of its biggest challenges. Currently, oyster bags have to be manually flipped every one to two weeks to reduce biofouling. Image: John Freidah, MIT MechE

By Michaela Jarvis | Department of Mechanical Engineering

When Michelle Kornberg was about to graduate from MIT, she wanted to use her knowledge of mechanical and ocean engineering to make the world a better place. Luckily, she found the perfect senior capstone class project: supporting sustainable seafood by helping aquaculture farmers grow oysters.

“It’s our responsibility to use our skills and opportunities to work on problems that really matter,” says Kornberg, who now works for an aquaculture company called Innovasea. “Food sustainability is incredibly important from an environmental standpoint, of course, but it also matters on a social level. The most vulnerable will be hurt worst by the climate crisis, and I think food sustainability and availability really matters on that front.”

The project undertaken by Kornberg’s capstone class, 2.017 (Design of Electromechanical Robotic Systems), came out of conversations between Michael Triantafyllou, who is MIT’s Henry L. and Grace Doherty Professor in Ocean Science and Engineering and director of MIT Sea Grant, and Dan Ward. Ward, a seasoned oyster farmer and marine biologist, owns Ward Aquafarms on Cape Cod and has worked extensively to advance the aquaculture industry by seeking solutions to some of its biggest challenges.

Speaking with Triantafyllou at MIT Sea Grant — part of a network of university-based programs established by the federal government to protect the coastal environment and economy — Ward had explained that each of his thousands of floating mesh oyster bags needs to be turned over about 11 times a year. The flipping allows algae, barnacles, and other “biofouling” organisms that grow on the part of the bag beneath the water’s surface to be exposed to air and light, so they can dry and chip off. If this task is not performed, water flow to the oysters, which is necessary for their growth, is blocked.

The bags are flipped by a farmworker in a kayak, and the task is monotonous, often performed in rough water and bad weather, and ergonomically injurious. “It’s kind of awful, generally speaking,” Ward says, adding that he pays about $3,500 per year to have the bags turned over at each of his two farm sites — and struggles to find workers who want to do the job of flipping bags that can grow to a weight of 60 or 70 pounds just before the oysters are harvested.

Presented with this problem, the capstone class Kornberg was in — composed of six students in mechanical engineering, ocean engineering, and electrical engineering and computer science — brainstormed solutions. Most of the solutions, Kornberg says, involved an autonomous robot that would take over the bag-flipping. It was during that class that the original version of the “Oystamaran,” a catamaran with a flipping mechanism between its two hulls, was born.

A combination of mechanical engineering, ocean engineering, and electrical engineering and computer science students work together to design a robot to help with flipping oyster bags at Ward Aquafarms on Cape Cod. The “Oystamaran” robot uses a vision system to position and flip the bags. Image: Lauren Futami, MIT MechE

Ward’s involvement in the project has been important to its evolution. He says he has reviewed many projects in his work on advisory boards that propose new technologies for aquaculture. Often, they don’t correspond with the actual challenges faced by the industry.

“It was always ‘I already have this remotely operated vehicle; would it be useful to you as an oyster farmer if I strapped on some kind of sensor?’” Ward says. “They try to fit robotics into aquaculture without any industry collaboration, which leads to a robotic product that doesn’t solve any of the issues we experience out on the farm. Having the opportunity to work with MIT Sea Grant to really start from the ground up has been exciting. Their approach has been, ‘What’s the problem, and what’s the best way to solve the problem?’ We do have a real need for robotics in aquaculture, but you have to come at it from the customer-first, not the technology-first, perspective.”

Triantafyllou says that while the task the robot performs is similar to work done by robots in other industries, the “special difficulty” students faced while designing the Oystamaran was its work environment.

“You have a floating device, which must be self-propelled, and which must find these objects in an environment that is not neat,” Triantafyllou says. “It’s a combination of vision and navigation in an environment that changes, with currents, wind, and waves. Very quickly, it becomes a complicated task.”

Kornberg, who had constructed the original central flipping mechanism and the basic structure of the vessel as a staff member at MIT Sea Grant after graduating in May 2020, worked as a lab instructor for the next capstone class related to the project in spring 2021. Andrew Bennett, education administrator at MIT Sea Grant, co-taught that class, in which students designed an Oystamaran version 2.0, which was tested at Ward Aquafarms and managed to flip several rows of bags while being controlled remotely. Next steps will involve making the vessel more autonomous, so it can be launched, navigate autonomously to the oyster bags, flip them, and return to the launching point. A third capstone class related to the project will take place this spring.

The students operate the “Oystamaran” robot remotely from the boat. Image: John Freidah, MIT MechE

Bennett says an ideal project outcome would be, “We have proven the concept, and now somebody in industry says, ‘You know, there’s money to be made in oysters. I think I’ll take over.’ And then we hand it off to them.”  

Meanwhile, he says an unexpected challenge arose with getting the Oystamaran to go between tightly packed rows of oyster bags in the center of an array.

“How does a robot shimmy in between things without wrecking something? It’s got to wiggle in somehow, which is a fascinating controls problem,” Bennett says, adding that the problem is a source of excitement, rather than frustration, to him. “I love a new challenge, and I really love when I find a problem that no one expected. Those are the fun ones.”

Triantafyllou calls the Oystamaran “a first for the industry,” explaining that the project has demonstrated that robots can perform extremely useful tasks in the ocean, and will serve as a model for future innovations in aquaculture.

“Just by showing the way, this may be the first of a number of robots,” he says. “It will attract talent to ocean farming, which is a great challenge, and also a benefit for society to have a reliable means of producing food from the ocean.”

One giant leap for the mini cheetah

MIT researchers have developed a system that improves the speed and agility of legged robots as they jump across gaps in the terrain. Credits: Photo courtesy of the researchers

By Adam Zewe | MIT News Office

A loping cheetah dashes across a rolling field, bounding over sudden gaps in the rugged terrain. The movement may look effortless, but getting a robot to move this way is an altogether different prospect.

In recent years, four-legged robots inspired by the movement of cheetahs and other animals have made great leaps forward, yet they still lag behind their mammalian counterparts when it comes to traveling across a landscape with rapid elevation changes.

“In those settings, you need to use vision in order to avoid failure. For example, stepping in a gap is difficult to avoid if you can’t see it. Although there are some existing methods for incorporating vision into legged locomotion, most of them aren’t really suitable for use with emerging agile robotic systems,” says Gabriel Margolis, a PhD student in the lab of Pulkit Agrawal, professor in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT.

Now, Margolis and his collaborators have developed a system that improves the speed and agility of legged robots as they jump across gaps in the terrain. The novel control system is split into two parts — one that processes real-time input from a video camera mounted on the front of the robot and another that translates that information into instructions for how the robot should move its body. The researchers tested their system on the MIT mini cheetah, a powerful, agile robot built in the lab of Sangbae Kim, professor of mechanical engineering.

Unlike other methods for controlling a four-legged robot, this two-part system does not require the terrain to be mapped in advance, so the robot can go anywhere. In the future, this could enable robots to charge off into the woods on an emergency response mission or climb a flight of stairs to deliver medication to an elderly shut-in.

Margolis wrote the paper with senior author Pulkit Agrawal, who heads the Improbable AI lab at MIT and is the Steven G. and Renee Finn Career Development Assistant Professor in the Department of Electrical Engineering and Computer Science; Professor Sangbae Kim in the Department of Mechanical Engineering at MIT; and fellow graduate students Tao Chen and Xiang Fu at MIT. Other co-authors include Kartik Paigwar, a graduate student at Arizona State University; and Donghyun Kim, an assistant professor at the University of Massachusetts at Amherst. The work will be presented next month at the Conference on Robot Learning.

It’s all under control

The use of two separate controllers working together makes this system especially innovative.

A controller is an algorithm that will convert the robot’s state into a set of actions for it to follow. Many blind controllers — those that do not incorporate vision — are robust and effective but only enable robots to walk over continuous terrain.

Vision is such a complex sensory input to process that these algorithms are unable to handle it efficiently. Systems that do incorporate vision usually rely on a “heightmap” of the terrain, which must be either preconstructed or generated on the fly, a process that is typically slow and prone to failure if the heightmap is incorrect.

To develop their system, the researchers took the best elements from these robust, blind controllers and combined them with a separate module that handles vision in real-time.

The robot’s camera captures depth images of the upcoming terrain, which are fed to a high-level controller along with information about the state of the robot’s body (joint angles, body orientation, etc.). The high-level controller is a neural network that “learns” from experience.

That neural network outputs a target trajectory, which the second controller uses to come up with torques for each of the robot’s 12 joints. This low-level controller is not a neural network and instead relies on a set of concise, physical equations that describe the robot’s motion.
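A minimal sketch of that two-part hierarchy might look like the following, with a tiny random-weight network standing in for the learned high-level controller and a simple PD law standing in for the model-based low-level controller. The input sizes, placeholder weights, and gains are all assumptions; the real system outputs a full target trajectory and computes torques from the robot's dynamics equations.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- High-level controller: maps a depth image plus body state to joint targets.
# A random-weight MLP stands in for the network trained with reinforcement
# learning; the real policy outputs a target trajectory, not a single target. ---
DEPTH_PIXELS, STATE_DIM, HIDDEN, NUM_JOINTS = 24 * 32, 18, 64, 12
W1 = rng.normal(scale=0.01, size=(HIDDEN, DEPTH_PIXELS + STATE_DIM))
W2 = rng.normal(scale=0.01, size=(NUM_JOINTS, HIDDEN))

def high_level_policy(depth_image: np.ndarray, body_state: np.ndarray) -> np.ndarray:
    x = np.concatenate([depth_image.ravel(), body_state])
    return W2 @ np.tanh(W1 @ x)                 # target joint positions

# --- Low-level controller: not a neural network. A PD law stands in for the
# concise physical equations that turn the target into torques for 12 joints. ---
def low_level_controller(q, qdot, q_target, kp=40.0, kd=1.5) -> np.ndarray:
    return kp * (q_target - q) - kd * qdot

# One control step of the hierarchy.
depth = rng.random((24, 32))                    # placeholder depth image
state = np.zeros(STATE_DIM)                     # joint angles, body orientation, etc.
q, qdot = np.zeros(NUM_JOINTS), np.zeros(NUM_JOINTS)
torques = low_level_controller(q, qdot, high_level_policy(depth, state))
print(torques.shape)                            # (12,)
```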

“The hierarchy, including the use of this low-level controller, enables us to constrain the robot’s behavior so it is more well-behaved. With this low-level controller, we are using well-specified models that we can impose constraints on, which isn’t usually possible in a learning-based network,” Margolis says.

Teaching the network

The researchers used the trial-and-error method known as reinforcement learning to train the high-level controller. They conducted simulations of the robot running across hundreds of different discontinuous terrains and rewarded it for successful crossings.

Over time, the algorithm learned which actions maximized the reward.
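The training loop itself can be caricatured with a toy trial-and-error search: a single jump-scaling parameter is nudged toward whichever value earns more reward across randomized gap widths. This hill-climbing stand-in is only meant to show the reward-driven loop; the actual work trains a neural-network controller in a physics simulator with reinforcement learning.

```python
import random

random.seed(1)

# Toy stand-in: the "policy" is one parameter w that scales the observed gap
# width into a jump length, and the reward favors crossings that don't waste
# effort. Falling into the gap is penalized.
gaps = [random.uniform(0.1, 0.5) for _ in range(300)]   # randomized terrains

def avg_reward(w: float) -> float:
    total = 0.0
    for gap in gaps:
        jump = w * gap                                   # policy: scale the observed gap
        total += (1.0 - 0.3 * jump) if jump >= gap else -1.0
    return total / len(gaps)

w = 0.5
for episode in range(200):
    # Trial and error: move the parameter in whichever direction pays more reward.
    if avg_reward(w + 0.01) >= avg_reward(w - 0.01):
        w += 0.01
    else:
        w -= 0.01

print(f"learned gap-scaling factor: {w:.2f} (w >= 1 clears every gap)")
```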

Then they built a physical, gapped terrain with a set of wooden planks and put their control scheme to the test using the mini cheetah.

“It was definitely fun to work with a robot that was designed in-house at MIT by some of our collaborators. The mini cheetah is a great platform because it is modular and made mostly from parts that you can order online, so if we wanted a new battery or camera, it was just a simple matter of ordering it from a regular supplier and, with a little bit of help from Sangbae’s lab, installing it,” Margolis says.

From left to right: PhD students Tao Chen and Gabriel Margolis; Pulkit Agrawal, the Steven G. and Renee Finn Career Development Assistant Professor in the Department of Electrical Engineering and Computer Science; and PhD student Xiang Fu. Credits: Photo courtesy of the researchers

Estimating the robot’s state proved to be a challenge in some cases. Unlike in simulation, real-world sensors encounter noise that can accumulate and affect the outcome. So, for some experiments that involved high-precision foot placement, the researchers used a motion capture system to measure the robot’s true position.

Their system outperformed others that only use one controller, and the mini cheetah successfully crossed 90 percent of the terrains.

“One novelty of our system is that it does adjust the robot’s gait. If a human were trying to leap across a really wide gap, they might start by running really fast to build up speed and then they might put both feet together to have a really powerful leap across the gap. In the same way, our robot can adjust the timings and duration of its foot contacts to better traverse the terrain,” Margolis says.

Leaping out of the lab

While the researchers were able to demonstrate that their control scheme works in a laboratory, they still have a long way to go before they can deploy the system in the real world, Margolis says.

In the future, they hope to mount a more powerful computer to the robot so it can do all its computation on board. They also want to improve the robot’s state estimator to eliminate the need for the motion capture system. In addition, they’d like to improve the low-level controller so it can exploit the robot’s full range of motion, and enhance the high-level controller so it works well in different lighting conditions.

“It is remarkable to witness the flexibility of machine learning techniques capable of bypassing carefully designed intermediate processes (e.g. state estimation and trajectory planning) that centuries-old model-based techniques have relied on,” Kim says. “I am excited about the future of mobile robots with more robust vision processing trained specifically for locomotion.”

The research is supported, in part, by MIT’s Improbable AI Lab, the Biomimetic Robotics Laboratory, NAVER LABS, and the DARPA Machine Common Sense Program.

A robot that finds lost items

Researchers at MIT have developed a fully-integrated robotic arm that fuses visual data from a camera and radio frequency (RF) information from an antenna to find and retrieve objects, even when they are buried under a pile and fully out of view. Credits: Courtesy of the researchers

By Adam Zewe | MIT News Office

A busy commuter is ready to walk out the door, only to realize they’ve misplaced their keys and must search through piles of stuff to find them. Rapidly sifting through clutter, they wish they could figure out which pile was hiding the keys.

Researchers at MIT have created a robotic system that can do just that. The system, RFusion, is a robotic arm with a camera and radio frequency (RF) antenna attached to its gripper. It fuses signals from the antenna with visual input from the camera to locate and retrieve an item, even if the item is buried under a pile and completely out of view.

The RFusion prototype the researchers developed relies on RFID tags, which are cheap, battery-less tags that can be stuck to an item and reflect signals sent by an antenna. Because RF signals can travel through most surfaces (like the mound of dirty laundry that may be obscuring the keys), RFusion is able to locate a tagged item within a pile.

Using machine learning, the robotic arm automatically zeroes in on the object’s exact location, moves the items on top of it, grasps the object, and verifies that it picked up the right thing. The camera, antenna, robotic arm, and AI are fully integrated, so RFusion can work in any environment without requiring a special setup.

In this video still, the robotic arm is looking for keys hidden underneath items. Credits: Courtesy of the researchers

While finding lost keys is helpful, RFusion could have many broader applications in the future, like sorting through piles to fulfill orders in a warehouse, identifying and installing components in an auto manufacturing plant, or helping an elderly individual perform daily tasks in the home, though the current prototype isn’t quite fast enough yet for these uses.

“This idea of being able to find items in a chaotic world is an open problem that we’ve been working on for a few years. Having robots that are able to search for things under a pile is a growing need in industry today. Right now, you can think of this as a Roomba on steroids, but in the near term, this could have a lot of applications in manufacturing and warehouse environments,” said senior author Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science and director of the Signal Kinetics group in the MIT Media Lab.

Co-authors include research assistant Tara Boroushaki, the lead author; electrical engineering and computer science graduate student Isaac Perper; research associate Mergen Nachin; and Alberto Rodriguez, the Class of 1957 Associate Professor in the Department of Mechanical Engineering. The research will be presented at the Association for Computing Machinery Conference on Embedded Networked Sensor Systems next month.

Sending signals

RFusion begins searching for an object using its antenna, which bounces signals off the RFID tag (like sunlight being reflected off a mirror) to identify a spherical area in which the tag is located. It combines that sphere with the camera input, which narrows down the object’s location. For instance, the item can’t be located on an area of a table that is empty.
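In other words, the RF measurement constrains the tag to (roughly) a spherical shell around the antenna, and the camera supplies candidate locations that the shell can prune. The toy numpy sketch below uses made-up coordinates, a made-up range estimate, and a fixed tolerance purely to illustrate that intersection; it is not RFusion's actual fusion code.

```python
import numpy as np

# Candidate 3D points the camera considers plausible (e.g., tops of visible piles).
# Coordinates are arbitrary placeholders in meters, with the antenna at the origin.
candidates = np.array([
    [0.60, 0.10, 0.05],   # pile on the table
    [0.20, 0.20, 0.00],   # empty patch of table
    [0.35, 0.30, 0.10],   # laundry mound
])

rf_range = 0.47           # hypothetical tag distance estimated from the RF signal
tolerance = 0.05          # measurement uncertainty -> a spherical shell, not a surface

# Keep only candidates whose distance from the antenna matches the RF range:
# the intersection of the RF "sphere" with the camera's candidate set.
dists = np.linalg.norm(candidates, axis=1)
plausible = candidates[np.abs(dists - rf_range) < tolerance]
print(plausible)          # only the laundry mound survives
```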

But once the robot has a general idea of where the item is, it would need to swing its arm widely around the room taking additional measurements to come up with the exact location, which is slow and inefficient.

The researchers used reinforcement learning to train a neural network that can optimize the robot’s trajectory to the object. In reinforcement learning, the algorithm is trained through trial and error with a reward system.

“This is also how our brain learns. We get rewarded from our teachers, from our parents, from a computer game, etc. The same thing happens in reinforcement learning. We let the agent make mistakes or do something right and then we punish or reward the network. This is how the network learns something that is really hard for it to model,” Boroushaki explains.

In the case of RFusion, the optimization algorithm was rewarded when it limited the number of moves it had to make to localize the item and the distance it had to travel to pick it up.
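A reward with that flavor can be written in a couple of lines: penalize both the number of measurement moves and the distance traveled, so shorter, more decisive trajectories score higher. The weights below are arbitrary placeholders, not the values used in RFusion.

```python
def localization_reward(num_moves: int, travel_distance_m: float,
                        move_weight: float = 1.0, distance_weight: float = 2.0) -> float:
    """Reward shaping in the spirit described above: fewer antenna moves and a
    shorter trip to the item both score higher. The weights are placeholders."""
    return -(move_weight * num_moves + distance_weight * travel_distance_m)

# A trajectory that localizes the tag in 3 moves over 0.8 m beats one that
# needs 7 moves over 1.5 m.
print(localization_reward(3, 0.8))   # -4.6
print(localization_reward(7, 1.5))   # -10.0
```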

Once the system identifies the exact right spot, the neural network uses combined RF and visual information to predict how the robotic arm should grasp the object, including the angle of the hand and the width of the gripper, and whether it must remove other items first. It also scans the item’s tag one last time to make sure it picked up the right object.

Cutting through clutter

The researchers tested RFusion in several different environments. They buried a keychain in a box full of clutter and hid a remote control under a pile of items on a couch.

But if they fed all the camera data and RF measurements to the reinforcement learning algorithm, it would have overwhelmed the system. So, drawing on the method a GPS uses to consolidate data from satellites, they summarized the RF measurements and limited the visual data to the area right in front of the robot.

Their approach worked well — RFusion had a 96 percent success rate when retrieving objects that were fully hidden under a pile.

“We let the agent make mistakes or do something right and then we punish or reward the network. This is how the network learns something that is really hard for it to model,” co-author Tara Boroushaki, pictured here, explains. Credits: Courtesy of the researchers

“Sometimes, if you only rely on RF measurements, there is going to be an outlier, and if you rely only on vision, there is sometimes going to be a mistake from the camera. But if you combine them, they are going to correct each other. That is what made the system so robust,” Boroushaki says.

In the future, the researchers hope to increase the speed of the system so it can move smoothly, rather than stopping periodically to take measurements. This would enable RFusion to be deployed in a fast-paced manufacturing or warehouse setting.

Beyond its potential industrial uses, a system like this could even be incorporated into future smart homes to assist people with any number of household tasks, Boroushaki says.

“Every year, billions of RFID tags are used to identify objects in today’s complex supply chains, including clothing and lots of other consumer goods. The RFusion approach points the way to autonomous robots that can dig through a pile of mixed items and sort them out using the data stored in the RFID tags, much more efficiently than having to inspect each item individually, especially when the items look similar to a computer vision system,” says Matthew S. Reynolds, CoMotion Presidential Innovation Fellow and associate professor of electrical and computer engineering at the University of Washington, who was not involved in the research. “The RFusion approach is a great step forward for robotics operating in complex supply chains where identifying and ‘picking’ the right item quickly and accurately is the key to getting orders fulfilled on time and keeping demanding customers happy.”

The research is sponsored by the National Science Foundation, a Sloan Research Fellowship, NTT DATA, Toppan, Toppan Forms, and the Abdul Latif Jameel Water and Food Systems Lab.

Engineers create a programmable fiber

Image: Anna Gittelson. Photo by Roni Cnaani.

By Becky Ham | MIT News correspondent

MIT researchers have created the first fiber with digital capabilities, able to sense, store, analyze, and infer activity after being sewn into a shirt.

Yoel Fink, who is a professor of materials science and electrical engineering, a Research Laboratory of Electronics principal investigator, and the senior author on the study, says digital fibers expand the possibilities for fabrics to uncover the context of hidden patterns in the human body that could be used for physical performance monitoring, medical inference, and early disease detection.

Or, you might someday store your wedding music in the gown you wore on the big day — more on that later.

Fink and his colleagues describe the features of the digital fiber in Nature Communications. Until now, electronic fibers have been analog — carrying a continuous electrical signal — rather than digital, where discrete bits of information can be encoded and processed in 0s and 1s.

Image: Anna Gittelson. Photo by Roni Cnaani.

“This work presents the first realization of a fabric with the ability to store and process data digitally, adding a new information content dimension to textiles and allowing fabrics to be programmed literally,” Fink says.

MIT PhD student Gabriel Loke and MIT postdoc Tural Khudiyev are the lead authors on the paper. Other co-authors include MIT postdoc Wei Yan; MIT undergraduates Brian Wang, Stephanie Fu, Ioannis Chatziveroglou, Syamantak Payra, Yorai Shaoul, Johnny Fung, and Itamar Chinn; John Joannopoulos, the Francis Wright Davis Chair Professor of Physics and director of the Institute for Soldier Nanotechnologies at MIT; Harrisburg University of Science and Technology master’s student Pin-Wen Chou; and Rhode Island School of Design Associate Professor Anna Gitelson-Kahn. The fabric work was facilitated by Professor Anais Missakian, who holds the Pevaroff-Cohn Family Endowed Chair in Textiles at RISD.

Memory and more

The new fiber was created by placing hundreds of square silicon microscale digital chips into a preform that was then used to create a polymer fiber. By precisely controlling the polymer flow, the researchers were able to create a fiber with continuous electrical connection between the chips over a length of tens of meters.

A close-up photograph shows the fiber threading through a needle. Photo: Pin-Wen Chou.

The fiber itself is thin and flexible and can be passed through a needle, sewn into fabrics, and washed at least 10 times without breaking down. According to Loke, “When you put it into a shirt, you can’t feel it at all. You wouldn’t know it was there.”

Making a digital fiber “opens up different areas of opportunities and actually solves some of the problems of functional fibers,” he says.

For instance, it offers a way to control individual elements within a fiber, from one point at the fiber’s end. “You can think of our fiber as a corridor, and the elements are like rooms, and they each have their own unique digital room numbers,” Loke explains. The research team devised a digital addressing method that allows them to “switch on” the functionality of one element without turning on all the elements.
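The "corridor and rooms" picture suggests a simple way to think about the addressing: every element sees every message on the shared line, but only the element whose "room number" matches reacts. The sketch below is a generic illustration of that idea with a made-up message format; the fiber's actual protocol is not described here and is not reproduced.

```python
from dataclasses import dataclass

@dataclass
class FiberElement:
    address: int                  # the element's unique "room number"
    active: bool = False

    def on_bus_message(self, target_address: int, command: str) -> None:
        # Every element sees every message on the shared line, but only the
        # addressed one reacts; the rest stay off.
        if target_address == self.address:
            self.active = (command == "on")

# A corridor of elements sharing one line along the fiber.
fiber = [FiberElement(address=i) for i in range(8)]

def broadcast(target_address: int, command: str) -> None:
    for element in fiber:
        element.on_bus_message(target_address, command)

broadcast(5, "on")                               # switch on element 5 only
print([e.address for e in fiber if e.active])    # [5]
```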

A digital fiber can also store a lot of information in memory. The researchers were able to write, store, and read information on the fiber, including a 767-kilobit full-color short movie file and a 0.48 megabyte music file. The files can be stored for two months without power.

When they were dreaming up “crazy ideas” for the fiber, Loke says, they thought about applications like a wedding gown that would store digital wedding music within the weave of its fabric, or even writing the story of the fiber’s creation into its components.

Fink notes that the research at MIT was in close collaboration with the textile department at RISD led by Missakian.  Gitelson-Kahn incorporated the digital fibers into a knitted garment sleeve, thus paving the way to creating the first digital garment.

On-body artificial intelligence

The fiber also takes a few steps forward into artificial intelligence by including, within the fiber memory, a neural network of 1,650 connections. After sewing it around the armpit of a shirt, the researchers used the fiber to collect 270 minutes of surface body temperature data from a person wearing the shirt, and analyze how these data corresponded to different physical activities. Trained on these data, the fiber was able to determine with 96 percent accuracy what activity the person wearing it was engaged in.
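To give a sense of scale for "a neural network of 1,650 connections," here is one hypothetical layer sizing whose weight count happens to total exactly 1,650: 30 temperature samples feeding 50 hidden units feeding 3 activity classes. The architecture, inputs, and (untrained, random) weights are purely illustrative; the fiber's actual network and training procedure are not reproduced here.

```python
import numpy as np

np.random.seed(0)

# One hypothetical sizing that totals exactly 1,650 weights (biases ignored):
# 30 temperature samples -> 50 hidden units -> 3 activity classes.
W1 = np.random.randn(50, 30) * 0.1    # 30 * 50 = 1,500 connections
W2 = np.random.randn(3, 50) * 0.1     # 50 * 3  =   150 connections

def classify(temperature_window: np.ndarray) -> int:
    """Return an activity class index for a 30-sample temperature window."""
    hidden = np.tanh(W1 @ temperature_window)
    logits = W2 @ hidden
    return int(np.argmax(logits))

print("total connections:", W1.size + W2.size)        # 1650
print("predicted class:", classify(np.random.rand(30)))
```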

Adding an AI component to the fiber further increases its possibilities, the researchers say. Fabrics with digital components can collect a lot of information across the body over time, and these “lush data” are perfect for machine learning algorithms, Loke says.

A close-up photograph of the digital fibers on green fabric. Image: Anna Gittelson. Photo by Roni Cnaani.

“This type of fabric could give quantity and quality open-source data for extracting out new body patterns that we did not know about before,” he says.

With this analytic power, the fibers someday could sense and alert people in real-time to health changes like a respiratory decline or an irregular heartbeat, or deliver muscle activation or heart rate data to athletes during training.

The fiber is controlled by a small external device, so the next step will be to design a new chip as a microcontroller that can be connected within the fiber itself.

“When we can do that, we can call it a fiber computer,” Loke says.

This research was supported by the U.S. Army Institute of Soldier Nanotechnology, National Science Foundation, the U.S. Army Research Office, the MIT Sea Grant, and the Defense Threat Reduction Agency.

Slender robotic finger senses buried items

MIT researchers developed a “Digger Finger” robot that digs through granular material, like sand and gravel, and senses the shapes of buried objects. The technology could aid in disarming buried bombs or inspecting underground cables. Image courtesy of the researchers.

By Daniel Ackerman

Over the years, robots have gotten quite good at identifying objects — as long as they’re out in the open.

Discerning buried items in granular material like sand is a taller order. To do that, a robot would need fingers that were slender enough to penetrate the sand, mobile enough to wriggle free when sand grains jam, and sensitive enough to feel the detailed shape of the buried object.

MIT researchers have now designed a sharp-tipped robot finger equipped with tactile sensing to meet the challenge of identifying buried objects. In experiments, the aptly named Digger Finger was able to dig through granular media such as sand and rice, and it correctly sensed the shapes of submerged items it encountered. The researchers say the robot might one day perform various subterranean duties, such as finding buried cables or disarming buried bombs.

The research will be presented at the next International Symposium on Experimental Robotics. The study’s lead author is Radhen Patel, a postdoc in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Co-authors include CSAIL PhD student Branden Romero, Harvard University PhD student Nancy Ouyang, and Edward Adelson, the John and Dorothy Wilson Professor of Vision Science in CSAIL and the Department of Brain and Cognitive Sciences.

Seeking to identify objects buried in granular material — sand, gravel, and other types of loosely packed particles — isn’t a brand new quest. Previously, researchers have used technologies that sense the subterranean from above, such as Ground Penetrating Radar or ultrasonic vibrations. But these techniques provide only a hazy view of submerged objects. They might struggle to differentiate rock from bone, for example.

“So, the idea is to make a finger that has a good sense of touch and can distinguish between the various things it’s feeling,” says Adelson. “That would be helpful if you’re trying to find and disable buried bombs, for example.” Making that idea a reality meant clearing a number of hurdles.

The team’s first challenge was a matter of form: The robotic finger had to be slender and sharp-tipped.

In prior work, the researchers had used a tactile sensor called GelSight. The sensor consisted of a clear gel covered with a reflective membrane that deformed when objects pressed against it. Behind the membrane were three colors of LED lights and a camera. The lights shone through the gel and onto the membrane, while the camera collected the membrane’s pattern of reflection. Computer vision algorithms then extracted the 3D shape of the contact area where the soft finger touched the object. The contraption provided an excellent sense of artificial touch, but it was inconveniently bulky.
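GelSight-style sensors typically turn those per-pixel intensities into geometry with a photometric-stereo-style step: under a Lambertian assumption, the brightness seen under lights from known directions satisfies I ≈ albedo·(L·n), so the surface normal at each pixel can be recovered by least squares and the normals integrated into a height map. The sketch below is the textbook version of that step with made-up light directions and pixel values, not the calibrated pipeline used in this work.

```python
import numpy as np

# Assumed (approximately unit) directions for the three colored LEDs.
L = np.array([
    [0.60,  0.00, 0.80],
    [-0.30,  0.52, 0.80],
    [-0.30, -0.52, 0.80],
])

def normal_from_intensities(intensities: np.ndarray) -> np.ndarray:
    """Per-pixel Lambertian photometric stereo: solve I = L @ (albedo * n)."""
    g, *_ = np.linalg.lstsq(L, intensities, rcond=None)
    return g / np.linalg.norm(g)            # drop albedo, keep the unit normal

# Example: intensities measured at one pixel under the three lights (made up).
pixel_rgb = np.array([0.55, 0.35, 0.41])
print(normal_from_intensities(pixel_rgb))
```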

A closeup photograph of the new robot and a diagram of its parts. Image courtesy of the researchers.

For the Digger Finger, the researchers slimmed down their GelSight sensor in two main ways. First, they changed the shape to be a slender cylinder with a beveled tip. Next, they ditched two-thirds of the LED lights, using a combination of blue LEDs and colored fluorescent paint. “That saved a lot of complexity and space,” says Ouyang. “That’s how we were able to get it into such a compact form.” The final product featured a device whose tactile sensing membrane was about 2 square centimeters, similar to the tip of a finger.

With size sorted out, the researchers turned their attention to motion, mounting the finger on a robot arm and digging through fine-grained sand and coarse-grained rice. Granular media have a tendency to jam when numerous particles become locked in place. That makes it difficult to penetrate. So, the team added vibration to the Digger Finger’s capabilities and put it through a battery of tests.

“We wanted to see how mechanical vibrations aid in digging deeper and getting through jams,” says Patel. “We ran the vibrating motor at different operating voltages, which changes the amplitude and frequency of the vibrations.” They found that rapid vibrations helped “fluidize” the media, clearing jams and allowing for deeper burrowing — though this fluidizing effect was harder to achieve in sand than in rice.

Top row: 3D printed objects used for the object identification experiment. Middle row: Example image data when the Digger Finger directly touches a 3D printed object. Bottom row: Example image data when the Digger Finger touches a 3D printed object that is buried in sand. Image courtesy of the researchers.

They also tested various twisting motions in both the rice and sand. Sometimes, grains of each type of media would get stuck between the Digger Finger’s tactile membrane and the buried object it was trying to sense. When this happened with rice, the trapped grains were large enough to completely obscure the shape of the object, though the occlusion could usually be cleared with a little robotic wiggling. Trapped sand was harder to clear, though the grains’ small size meant the Digger Finger could still sense the general contours of the target object.

Patel says that operators will have to adjust the Digger Finger’s motion pattern for different settings “depending on the type of media and on the size and shape of the grains.” The team plans to keep exploring new motions to optimize the Digger Finger’s ability to navigate various media.

Adelson says the Digger Finger is part of a program extending the domains in which robotic touch can be used. Humans use their fingers amidst complex environments, whether fishing for a key in a pants pocket or feeling for a tumor during surgery. “As we get better at artificial touch, we want to be able to use it in situations when you’re surrounded by all kinds of distracting information,” says Adelson. “We want to be able to distinguish between the stuff that’s important and the stuff that’s not.”

Funding for this research was provided, in part, by the Toyota Research Institute through the Toyota-CSAIL Joint Research Center; the Office of Naval Research; and the Norwegian Research Council.


Q&A: Vivienne Sze on crossing the hardware-software divide for efficient artificial intelligence

Associate professor Vivienne Sze is bringing artificial intelligence applications to smartphones and tiny robots by co-designing energy-efficient hardware and software. Image credits: Lillie Paquette, MIT School of Engineering

Not so long ago, watching a movie on a smartphone seemed impossible. Vivienne Sze was a graduate student at MIT at the time, in the mid-2000s, and she was drawn to the challenge of compressing video to keep image quality high without draining the phone’s battery. The solution she hit upon called for co-designing energy-efficient circuits with energy-efficient algorithms.

Sze would go on to be part of the team that won an Engineering Emmy Award for developing the video compression standards still in use today. Now an associate professor in MIT’s Department of Electrical Engineering and Computer Science, Sze has set her sights on a new milestone: bringing artificial intelligence applications to smartphones and tiny robots.

Her research focuses on designing more-efficient deep neural networks to process video, and more-efficient hardware to run those applications. She recently co-published a book on the topic, and will teach a professional education course on how to design efficient deep learning systems in June.

On April 29, Sze will join Assistant Professor Song Han for an MIT Quest AI Roundtable on the co-design of efficient hardware and software, moderated by Aude Oliva, director of MIT Quest Corporate and MIT director of the MIT-IBM Watson AI Lab. Here, Sze discusses her recent work.

Q: Why do we need low-power AI now?

A: AI applications are moving to smartphones, tiny robots, and internet-connected appliances and other devices with limited power and processing capabilities. The challenge is that AI has high computing requirements. Analyzing sensor and camera data from a self-driving car can consume about 2,500 watts, but the computing budget of a smartphone is just about a single watt. Closing this gap requires rethinking the entire stack, a trend that will define the next decade of AI.

Q: What’s the big deal about running AI on a smartphone?

A: It means that the data processing no longer has to take place in the “cloud,” on racks of warehouse servers. Untethering compute from the cloud allows us to broaden AI’s reach. It gives people in developing countries with limited communication infrastructure access to AI. It also speeds up response time by reducing the lag caused by communicating with distant servers. This is crucial for interactive applications like autonomous navigation and augmented reality, which need to respond instantaneously to changing conditions. Processing data on the device can also protect medical and other sensitive records. Data can be processed right where they’re collected.

Q: What makes modern AI so inefficient?

A: The cornerstone of modern AI — deep neural networks — can require hundreds of millions to billions of calculations — orders of magnitude greater than compressing video on a smartphone. But it’s not just number crunching that makes deep networks energy-intensive — it’s the cost of shuffling data to and from memory to perform these computations. The farther the data have to travel, and the more data there are, the greater the bottleneck.

Q: How are you redesigning AI hardware for greater energy efficiency?

A: We focus on reducing data movement and the amount of data needed for computation. In some deep networks, the same data are used multiple times for different computations. We design specialized hardware to reuse data locally rather than send them off-chip. Storing reused data on-chip makes the process extremely energy-efficient.  

We also optimize the order in which data are processed to maximize their reuse. That’s the key property of the Eyeriss chip that was developed in collaboration with Joel Emer. In our follow-up work, Eyeriss v2, we made the chip flexible enough to reuse data across a wider range of deep networks. The Eyeriss chip also uses compression to reduce data movement, a common tactic among AI chips. The low-power Navion chip that was developed in collaboration with Sertac Karaman for mapping and navigation applications in robotics uses two to three orders of magnitude less energy than a CPU, in part by using optimizations that reduce the amount of data processed and stored on-chip.
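To make the data-reuse point concrete, the toy example below counts how often a weight must be fetched into a one-slot on-chip buffer for the same one-dimensional convolution under two loop orders. It illustrates only the general principle that processing order drives data movement; the actual Eyeriss dataflow and energy model are far more sophisticated and are not reproduced here.

```python
# Toy illustration of why processing order drives data movement: the same 1-D
# convolution runs under two loop orders, and we count how often a weight has to
# be (re)fetched into a one-slot on-chip buffer. A sketch of the general idea
# only, not the Eyeriss chip's dataflow.
def conv_fetch_count(n_outputs, n_weights, weight_outer):
    fetches, buffered = 0, None
    loops = (
        ((w, o) for w in range(n_weights) for o in range(n_outputs))
        if weight_outer else
        ((w, o) for o in range(n_outputs) for w in range(n_weights))
    )
    for w, _o in loops:
        if buffered != w:          # weight not already on-chip: fetch it
            fetches += 1
            buffered = w
    return fetches

if __name__ == "__main__":
    # keeping a weight on-chip across all outputs reuses it; the other order refetches it constantly
    print("weight outer:", conv_fetch_count(1024, 16, weight_outer=True))   # 16 fetches
    print("output outer:", conv_fetch_count(1024, 16, weight_outer=False))  # 16,384 fetches
```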

Q: What changes have you made on the software side to boost efficiency?

A: The more that software aligns with hardware-related performance metrics like energy efficiency, the better we can do. Pruning, for example, is a popular way to remove weights from a deep network to reduce computation costs. But rather than remove weights based on their magnitude, our work on energy-aware pruning suggests you can remove the more energy-intensive weights to improve overall energy consumption. Another method we’ve developed, NetAdapt, automates the process of adapting and optimizing a deep network for a smartphone or other hardware platforms. Our recent follow-up work, NetAdaptv2, accelerates the optimization process to further boost efficiency.
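The sketch below contrasts the two pruning orderings Sze describes: plain magnitude pruning versus an energy-aware ranking in which weights in costlier layers are removed first. The per-layer energy numbers are invented placeholders, and the scoring rule is an assumption for illustration rather than the measured energy model used in the energy-aware pruning work.

```python
# Hedged sketch of energy-aware vs. magnitude pruning: rank weights by an
# estimated energy cost per unit magnitude, then zero out the top-ranked ones.
# The layer energy costs are hypothetical placeholders.
import numpy as np

def prune_masks(weights_by_layer, layer_energy, frac, energy_aware=True):
    """Return {layer: boolean mask}; True marks weights to keep."""
    names, shapes, scores = [], [], []
    for name, w in weights_by_layer.items():
        mag = np.abs(w).ravel()
        cost = layer_energy[name] if energy_aware else 1.0
        scores.append(cost / (mag + 1e-8))      # high score = attractive to remove
        names.append(name)
        shapes.append(w.shape)
    all_scores = np.concatenate(scores)
    k = int(frac * all_scores.size)             # number of weights to prune
    cutoff = np.partition(all_scores, -k)[-k] if k else np.inf
    return {name: (s < cutoff).reshape(shape)
            for name, shape, s in zip(names, shapes, scores)}

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    layers = {"conv1": rng.normal(size=(16, 3, 3, 3)), "fc": rng.normal(size=(10, 256))}
    energy = {"conv1": 5.0, "fc": 1.0}          # hypothetical relative costs per weight
    for name, mask in prune_masks(layers, energy, frac=0.5).items():
        print(name, "fraction kept:", round(float(mask.mean()), 2))
```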

Q: What low-power AI applications are you working on?

A: I’m exploring autonomous navigation for low-energy robots with Sertac Karaman. I’m also working with Thomas Heldt to develop a low-cost and potentially more effective way of diagnosing and monitoring people with neurodegenerative disorders like Alzheimer’s and Parkinson’s by tracking their eye movements. Eye-movement properties like reaction time could potentially serve as biomarkers for brain function. In the past, eye-movement tracking took place in clinics because of the expensive equipment required. We’ve shown that an ordinary smartphone camera can take measurements from a patient’s home, making data collection easier and less costly. This could help to monitor disease progression and track improvements in clinical drug trials.

Q: Where is low-power AI headed next?

A: Reducing AI’s energy requirements will extend AI to a wider range of embedded devices, extending its reach into tiny robots, smart homes, and medical devices. A key challenge is that efficiency often requires a tradeoff in performance. For wide adoption, it will be important to dig deeper into these different applications to establish the right balance between efficiency and accuracy.

Navigating beneath the Arctic ice

For scientists to understand the role the changing environment in the Arctic Ocean plays in global climate change, there is a need to map the ocean below the ice cover. Image credits: Troy Barnhart, Chief Petty Officer, U.S. Navy

By Mary Beth Gallagher | Department of Mechanical Engineering

There is a lot of activity beneath the vast, lonely expanses of ice and snow in the Arctic. Climate change has dramatically altered the layer of ice that covers much of the Arctic Ocean. Areas of water that used to be covered by a solid ice pack are now covered by thin layers of ice only 3 feet thick. Beneath the ice, a warm layer of water, part of the Beaufort Lens, has changed the makeup of the aquatic environment.

For scientists to understand the role this changing environment in the Arctic Ocean plays in global climate change, there is a need for mapping the ocean below the ice cover.

A team of MIT engineers and naval officers led by Henrik Schmidt, professor of mechanical and ocean engineering, is trying to understand environmental changes, their impact on acoustic transmission beneath the surface, and how these changes affect navigation and communication for vehicles traveling below the ice.

“Basically, what we want to understand is how does this new Arctic environment brought about by global climate change affect the use of underwater sound for communication, navigation, and sensing?” explains Schmidt.

To answer this question, Schmidt traveled to the Arctic with members of the Laboratory for Autonomous Marine Sensing Systems (LAMSS) including Daniel Goodwin and Bradli Howard, graduate students in the MIT-Woods Hole Oceanographic Institution Joint Program in oceanographic engineering.

With funding from the Office of Naval Research, the team participated in ICEX — or Ice Exercise — 2020, a three-week program hosted by the U.S. Navy, where military personnel, scientists, and engineers work side-by-side executing a variety of research projects and missions.

A strategic waterway

The rapidly changing environment in the Arctic has wide-ranging impacts. In addition to giving researchers more information about the impact of global warming and the effects it has on marine mammals, the thinning ice could potentially open up new shipping lanes and trade routes in areas that were previously untraversable.

Perhaps most crucially for the U.S. Navy, understanding the altered environment also has geopolitical importance.

“If the Arctic environment is changing and we don’t understand it, that could have implications in terms of national security,” says Goodwin.

Several years ago, Schmidt and his colleague Arthur Baggeroer, professor of mechanical and ocean engineering, were among the first to recognize that the warmer waters, part of the Beaufort Lens, coupled with the changing ice composition, impacted how sound traveled in the water.

To successfully navigate throughout the Arctic, the U.S. Navy and other entities in the region need to understand how these changes in sound propagation affect a vehicle’s ability to communicate and navigate through the water.

Using an unpiloted, autonomous underwater vehicle (AUV) built by General Dynamics-Mission Systems (GD-MS), and a system of sensors rigged on buoys developed by the Woods Hole Oceanographic Institution, Schmidt and his team, joined by Dan McDonald and Josiah DeLange of GD-MS, set out to demonstrate a new integrated acoustic communication and navigation concept.

The research team prepares to deploy an autonomous underwater vehicle built by General Dynamics Mission Systems to test their navigational concept. Image credits: Daniel Goodwin, LCDR, USN

The framework, which was also supported and developed by LAMSS members Supun Randeni, EeShan Bhatt, Rui Chen, and Oscar Viquez, as well as LAMSS alumnus Toby Schneider of GobySoft LLC, would allow vehicles to travel through the water with GPS-level accuracy while employing oceanographic sensors for data collection.

“In order to prove that you can use this navigational concept in the Arctic, we have to first ensure we fully understand the environment that we’re operating in,” adds Goodwin.

Understanding the environment below

After arriving at the Arctic Submarine Lab’s ice camp last spring, the research team deployed a number of conductivity-temperature-depth probes to gather data about the aquatic environment in the Arctic.

“By using temperature and salinity as a function of depth, we calculate the sound speed profile. This helps us understand if the AUV’s location is good for communication or bad,” says Howard, who was responsible for monitoring environmental changes to the water column throughout ICEX.
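The conversion Howard describes is typically done with an empirical formula. The sketch below uses one standard choice, the Mackenzie (1981) equation, to turn temperature, salinity, and depth into sound speed; the profile values are illustrative rather than ICEX measurements, and the team’s exact processing chain is not described in the article.

```python
# Sound speed from temperature (deg C), salinity (ppt), and depth (m) using the
# Mackenzie (1981) empirical equation. The sample profile is a toy stand-in for
# an Arctic water column, not measured data.
def mackenzie_sound_speed(T, S, D):
    return (1448.96 + 4.591 * T - 5.304e-2 * T**2 + 2.374e-4 * T**3
            + 1.340 * (S - 35.0) + 1.630e-2 * D + 1.675e-7 * D**2
            - 1.025e-2 * T * (S - 35.0) - 7.139e-13 * T * D**3)

if __name__ == "__main__":
    # toy profile: cold, fresher water near the surface over a warmer, saltier layer
    profile = [(5, -1.6, 30.5), (50, -1.2, 31.0), (80, 0.5, 32.5), (200, -0.5, 34.0)]
    for depth, temp, sal in profile:
        print(f"{depth:4d} m  ->  {mackenzie_sound_speed(temp, sal, depth):7.1f} m/s")
```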

A team including Professor Henrik Schmidt, MIT-WHOI Joint Program graduate students Daniel Goodwin and Bradli Howard, members of the Laboratory for Autonomous Marine Sensing Systems, and the Arctic Submarine Lab traveled to the Arctic in March 2020 as part of ICEX 2020, a three-week program hosted by the U.S. Navy, where military personnel, scientists, and engineers work side-by-side executing a variety of research projects and missions. Image credits: Mike Demello, Arctic Submarine Laboratory

Because of the way sound bends in water, a behavior described by Snell’s law, pressure waves concentrate in some parts of the water column and disperse in others. Understanding these propagation trajectories is key to predicting good and bad locations for the AUV to operate.
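For readers who want the mechanics: Snell’s law for acoustics says sin(θ)/c stays constant along a ray, so sound bends toward water with a lower sound speed. The sketch below steps a single ray through a layered profile; the layer speeds and launch angle are toy values, not measured data.

```python
# Snell's law across horizontally layered water: sin(theta)/c is constant, where
# theta is measured from the vertical. Toy layer speeds, not ICEX data.
import math

def refract(theta_deg, c_in, c_out):
    """Return the ray angle after crossing a layer boundary, or None if the ray
    is totally reflected and turns back."""
    s = math.sin(math.radians(theta_deg)) * c_out / c_in
    return None if abs(s) > 1.0 else math.degrees(math.asin(s))

if __name__ == "__main__":
    theta = 75.0                                # launch angle from vertical
    speeds = [1435.0, 1440.0, 1450.0, 1462.0]   # m/s, increasing with depth (toy)
    for c_in, c_out in zip(speeds, speeds[1:]):
        theta = refract(theta, c_in, c_out)
        if theta is None:
            print("ray turns back toward the slower layer")
            break
        print(f"{c_in:.0f} -> {c_out:.0f} m/s : angle {theta:.1f} deg from vertical")
```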

To map the areas of the water with optimal acoustic properties, Howard modified the traditional signal-to-noise ratio (SNR) using a metric known as the multi-path penalty (MPP), which penalizes areas where the AUV receives echoes of its messages. As a result, the vehicle prioritizes operations in areas with less reverberation.
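The article does not spell out how the MPP is computed, so the following is only a guess at its flavor: a score that starts from SNR and subtracts a penalty that grows with the energy arriving in late multipath echoes. The windowing and penalty weight are hypothetical.

```python
# Hypothetical multipath-penalized score: SNR of the direct arrival minus a
# penalty for energy in later echoes. Not the metric defined by the researchers.
import numpy as np

def mpp_score(impulse_response, noise_power, direct_window=5, penalty_weight=2.0):
    h = np.abs(np.asarray(impulse_response, dtype=float)) ** 2
    direct = h[:direct_window].sum()          # energy of the first arrival(s)
    echoes = h[direct_window:].sum()          # energy in later multipath echoes
    snr_db = 10 * np.log10(direct / noise_power + 1e-12)
    penalty_db = 10 * np.log10(1.0 + penalty_weight * echoes / (direct + 1e-12))
    return snr_db - penalty_db

if __name__ == "__main__":
    clean = [1.0, 0.2, 0.05] + [0.0] * 20
    reverberant = [1.0, 0.2, 0.05] + [0.3] * 20
    print("clean spot      :", round(mpp_score(clean, 1e-3), 1))
    print("reverberant spot:", round(mpp_score(reverberant, 1e-3), 1))
```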

These data allowed the team to identify exactly where the vehicle should be positioned in the water column for optimal communications, which in turn results in accurate navigation.

While Howard gathered data on how the characteristics of the water impact acoustics, Goodwin focused on how sound is projected and reflected off the ever-changing ice on the surface.

To gather these data, the AUV was outfitted with an acoustic device that measured the motion of the vehicle relative to the ice above. The sound it transmitted was picked up by several receivers attached to moorings hanging from the ice.

The data from the vehicle and the receivers were then used by the researchers to compute exactly where the vehicle was at a given time. This location information, together with the data Howard gathered on the acoustic environment in the water, offer a new navigational concept for vehicles traveling in the Arctic Sea.

Protecting the Arctic

After a series of setbacks and challenges due to the unforgiving conditions in the Arctic, the team was able to successfully prove their navigational concept worked. Thanks to the team’s efforts, naval operations and future trade vessels may be able to take advantage of the changing conditions in the Arctic to maximize navigational accuracy and improve underwater communications.

After a series of setbacks and challenges due to the unforgiving conditions in the Arctic, the team was able to successfully prove their navigational concept worked. Image credits: Dan McDonald, General Dynamics Mission Systems

“Our work could improve the ability for the U.S. Navy to safely and effectively operate submarines under the ice for extended periods,” Howard says.

Howard acknowledges that in addition to the changes in physical climate, the geopolitical climate continues to change. This only strengthens the need for improved navigation in the Arctic.

“The U.S. Navy’s goal is to preserve peace and protect global trade by ensuring freedom of navigation throughout the world’s oceans,” she adds. “The navigational concept we proved during ICEX will serve to help the Navy in that mission.”

A robot that senses hidden objects

MIT researchers developed a picking robot that combines vision with radio frequency (RF) sensing to find and grasp objects, even if they’re hidden from view. The technology could aid fulfillment in e-commerce warehouses. Credits: Courtesy of the researchers

By Daniel Ackerman | MIT News Office

In recent years, robots have gained artificial vision, touch, and even smell. “Researchers have been giving robots human-like perception,” says MIT Associate Professor Fadel Adib. In a new paper, Adib’s team is pushing the technology a step further. “We’re trying to give robots superhuman perception,” he says.

The researchers have developed a robot that uses radio waves, which can pass through walls, to sense occluded objects. The robot, called RF-Grasp, combines this powerful sensing with more traditional computer vision to locate and grasp items that might otherwise be blocked from view. The advance could one day streamline e-commerce fulfillment in warehouses or help a machine pluck a screwdriver from a jumbled toolkit.

The research will be presented in May at the IEEE International Conference on Robotics and Automation. The paper’s lead author is Tara Boroushaki, a research assistant in the Signal Kinetics Group at the MIT Media Lab. Her MIT co-authors include Adib, who is the director of the Signal Kinetics Group; and Alberto Rodriguez, the Class of 1957 Associate Professor in the Department of Mechanical Engineering. Other co-authors include Junshan Leng, a research engineer at Harvard University, and Ian Clester, a PhD student at Georgia Tech.

As e-commerce continues to grow, warehouse work is still usually the domain of humans, not robots, despite sometimes-dangerous working conditions. That’s in part because robots struggle to locate and grasp objects in such a crowded environment. “Perception and picking are two roadblocks in the industry today,” says Rodriguez. Using optical vision alone, robots can’t perceive the presence of an item packed away in a box or hidden behind another object on the shelf — visible light waves, of course, don’t pass through walls.

But radio waves can.

For decades, radio frequency (RF) identification has been used to track everything from library books to pets. RF identification systems have two main components: a reader and a tag. The tag is a tiny computer chip that gets attached to — or, in the case of pets, implanted in — the item to be tracked. The reader then emits an RF signal, which gets modulated by the tag and reflected back to the reader.

The reflected signal provides information about the location and identity of the tagged item. The technology has gained popularity in retail supply chains — Japan aims to use RF tracking for nearly all retail purchases in a matter of years. The researchers realized this profusion of RF could be a boon for robots, giving them another mode of perception.

“RF is such a different sensing modality than vision,” says Rodriguez. “It would be a mistake not to explore what RF can do.”

RF Grasp uses both a camera and an RF reader to find and grab tagged objects, even when they’re fully blocked from the camera’s view. It consists of a robotic arm attached to a grasping hand. The camera sits on the robot’s wrist. The RF reader stands independent of the robot and relays tracking information to the robot’s control algorithm. So, the robot is constantly collecting both RF tracking data and a visual picture of its surroundings. Integrating these two data streams into the robot’s decision making was one of the biggest challenges the researchers faced.

“The robot has to decide, at each point in time, which of these streams is more important to think about,” says Boroushaki. “It’s not just eye-hand coordination, it’s RF-eye-hand coordination. So, the problem gets very complicated.”

The robot initiates the seek-and-pluck process by pinging the target object’s RF tag for a sense of its whereabouts. “It starts by using RF to focus the attention of vision,” says Adib. “Then you use vision to navigate fine maneuvers.” The sequence is akin to hearing a siren from behind, then turning to look and get a clearer picture of the siren’s source.

With its two complementary senses, RF Grasp zeroes in on the target object. As it gets closer and even starts manipulating the item, vision, which provides much finer detail than RF, dominates the robot’s decision making.
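One simple way to picture that hand-off is a distance-based blend of the two position estimates: trust the RF tag when the target is far or occluded, and lean on the camera as the gripper closes in. The blending rule and hand-off distance below are invented for illustration; the actual RF-Grasp controller is more involved.

```python
# Hypothetical RF/vision fusion: far from the target, steer toward the RF tag
# estimate; near the target (or once vision sees it), weight the camera more.
# The hand-off distance and linear blend are assumptions for this sketch.
import numpy as np

def fused_target(rf_estimate, vision_estimate, distance_to_target,
                 handoff_distance=0.3):
    """Blend RF and vision position estimates (meters) based on distance."""
    if vision_estimate is None:               # target still occluded: trust RF alone
        return np.asarray(rf_estimate, dtype=float)
    w_vision = float(np.clip(1.0 - distance_to_target / handoff_distance, 0.0, 1.0))
    return (w_vision * np.asarray(vision_estimate, dtype=float)
            + (1.0 - w_vision) * np.asarray(rf_estimate, dtype=float))

if __name__ == "__main__":
    rf, cam = [0.52, 0.10, 0.05], [0.50, 0.12, 0.04]
    for d in (1.0, 0.25, 0.05):               # far, mid, close approach distances
        print(f"{d:.2f} m away ->", fused_target(rf, cam, d))
```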

RF Grasp proved its efficiency in a battery of tests. Compared to a similar robot equipped with only a camera, RF Grasp was able to pinpoint and grab its target object with about half as much total movement. Plus, RF Grasp displayed the unique ability to “declutter” its environment — removing packing materials and other obstacles in its way in order to access the target. Rodriguez says this demonstrates RF Grasp’s “unfair advantage” over robots without penetrative RF sensing. “It has this guidance that other systems simply don’t have.”

RF Grasp could one day perform fulfillment in packed e-commerce warehouses. Its RF sensing could even instantly verify an item’s identity without the need to manipulate the item, expose its barcode, then scan it. “RF has the potential to improve some of those limitations in industry, especially in perception and localization,” says Rodriguez.

Adib also envisions potential home applications for the robot, like locating the right Allen wrench to assemble your Ikea chair. “Or you could imagine the robot finding lost items. It’s like a super-Roomba that goes and retrieves my keys, wherever the heck I put them.”

The research is sponsored by the National Science Foundation, NTT DATA, Toppan, Toppan Forms, and the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS).

Researchers’ algorithm designs soft robots that sense

MIT researchers have developed a deep learning neural network to aid the design of soft-bodied robots, such as these iterations of a robotic elephant. Image: courtesy of the researchers

By Daniel Ackerman | MIT News Office

There are some tasks that traditional robots — the rigid and metallic kind — simply aren’t cut out for. Soft-bodied robots, on the other hand, may be able to interact with people more safely or slip into tight spaces with ease. But for robots to reliably complete their programmed duties, they need to know the whereabouts of all their body parts. That’s a tall task for a soft robot that can deform in a virtually infinite number of ways.

MIT researchers have developed an algorithm to help engineers design soft robots that collect more useful information about their surroundings. The deep-learning algorithm suggests an optimized placement of sensors within the robot’s body, allowing it to better interact with its environment and complete assigned tasks. The advance is a step toward the automation of robot design. “The system not only learns a given task, but also how to best design the robot to solve that task,” says Alexander Amini. “Sensor placement is a very difficult problem to solve. So, having this solution is extremely exciting.”

The research will be presented during April’s IEEE International Conference on Soft Robotics and will be published in the journal IEEE Robotics and Automation Letters. Co-lead authors are Amini and Andrew Spielberg, both PhD students in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Other co-authors include MIT PhD student Lillian Chin and professors Wojciech Matusik and Daniela Rus.

Creating soft robots that complete real-world tasks has been a long-running challenge in robotics. Their rigid counterparts have a built-in advantage: a limited range of motion. Rigid robots’ finite array of joints and limbs usually makes for manageable calculations by the algorithms that control mapping and motion planning. Soft robots are not so tractable.

Soft-bodied robots are flexible and pliant — they generally feel more like a bouncy ball than a bowling ball. “The main problem with soft robots is that they are infinitely dimensional,” says Spielberg. “Any point on a soft-bodied robot can, in theory, deform in any way possible.” That makes it tough to design a soft robot that can map the location of its body parts. Past efforts have used an external camera to chart the robot’s position and feed that information back into the robot’s control program. But the researchers wanted to create a soft robot untethered from external aid.

“You can’t put an infinite number of sensors on the robot itself,” says Spielberg. “So, the question is: How many sensors do you have, and where do you put those sensors in order to get the most bang for your buck?” The team turned to deep learning for an answer.

The researchers developed a novel neural network architecture that both optimizes sensor placement and learns to efficiently complete tasks. First, the researchers divided the robot’s body into regions called “particles.” Each particle’s rate of strain was provided as an input to the neural network. Through a process of trial and error, the network “learns” the most efficient sequence of movements to complete tasks, like gripping objects of different sizes. At the same time, the network keeps track of which particles are used most often, and it culls the lesser-used particles from the set of inputs for the network’s subsequent trials.
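A stripped-down version of that culling loop might look like the sketch below: train a small network on per-particle strain inputs, score each particle by how heavily the first layer relies on it, and drop the least-used particles before the next round. The importance measure (first-layer weight norms) and the tiny network are assumptions for illustration, not the architecture or scoring used in the paper.

```python
# Hedged sketch of importance-based particle culling for sensor placement.
# Not the paper's method: the network, training loop, and importance score
# (first-layer weight norms) are simplified stand-ins.
import numpy as np

def train_once(strains, targets, n_hidden=16, lr=0.05, epochs=300, seed=0):
    """Tiny one-hidden-layer regressor trained with plain gradient descent."""
    rng = np.random.default_rng(seed)
    n = len(strains)
    W1 = rng.normal(0, 0.1, (strains.shape[1], n_hidden))
    W2 = rng.normal(0, 0.1, (n_hidden, 1))
    for _ in range(epochs):
        h = np.tanh(strains @ W1)
        err = h @ W2 - targets
        gW2 = h.T @ err / n
        gW1 = strains.T @ ((err @ W2.T) * (1 - h**2)) / n
        W1 -= lr * gW1
        W2 -= lr * gW2
    return W1

def cull_particles(strains, targets, keep_fraction=0.5, rounds=3):
    """Iteratively drop the particles whose input weights matter least."""
    active = np.arange(strains.shape[1])
    for _ in range(rounds):
        W1 = train_once(strains[:, active], targets)
        importance = np.linalg.norm(W1, axis=1)          # per-particle usage score
        keep = max(1, int(keep_fraction * len(active)))
        active = active[np.argsort(importance)[-keep:]]  # keep the most-used particles
    return sorted(int(i) for i in active)                # suggested sensor sites

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(256, 20))                       # strain signals for 20 particles
    y = X[:, [3]] + 0.5 * X[:, [7]]                      # only particles 3 and 7 matter
    print("suggested sensor particles:", cull_particles(X, y))
```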

By optimizing the most important particles, the network also suggests where sensors should be placed on the robot to ensure efficient performance. For example, in a simulated robot with a grasping hand, the algorithm might suggest that sensors be concentrated in and around the fingers, where precisely controlled interactions with the environment are vital to the robot’s ability to manipulate objects. While that may seem obvious, it turns out the algorithm vastly outperformed humans’ intuition on where to site the sensors.

The researchers pitted their algorithm against a series of expert predictions. For three different soft robot layouts, the team asked roboticists to manually select where sensors should be placed to enable the efficient completion of tasks like grasping various objects. Then they ran simulations comparing the human-sensorized robots to the algorithm-sensorized robots. And the results weren’t close. “Our model vastly outperformed humans for each task, even though I looked at some of the robot bodies and felt very confident on where the sensors should go,” says Amini. “It turns out there are a lot more subtleties in this problem than we initially expected.”

Spielberg says their work could help to automate the process of robot design. In addition to developing algorithms to control a robot’s movements, “we also need to think about how we’re going to sensorize these robots, and how that will interplay with other components of that system,” he says. And better sensor placement could have industrial applications, especially where robots are used for fine tasks like gripping. “That’s something where you need a very robust, well-optimized sense of touch,” says Spielberg. “So, there’s potential for immediate impact.”

“Automating the design of sensorized soft robots is an important step toward rapidly creating intelligent tools that help people with physical tasks,” says Rus. “The sensors are an important aspect of the process, as they enable the soft robot to ‘see’ and understand the world and its relationship with the world.”

This research was funded, in part, by the National Science Foundation and the Fannie and John Hertz Foundation.

System detects errors when medication is self-administered

The new technology pairs wireless sensing with artificial intelligence to determine when a patient is using an insulin pen or inhaler, and it flags potential errors in the patient’s administration method. | Image: courtesy of the researchers

From swallowing pills to injecting insulin, patients frequently administer their own medication. But they don’t always get it right. Improper adherence to doctors’ orders is commonplace, accounting for thousands of deaths and billions of dollars in medical costs annually. MIT researchers have developed a system to reduce those numbers for some types of medications.

The new technology pairs wireless sensing with artificial intelligence to determine when a patient is using an insulin pen or inhaler, and flags potential errors in the patient’s administration method. “Some past work reports that up to 70% of patients do not take their insulin as prescribed, and many patients do not use inhalers properly,” says Dina Katabi, the Andrew and Erna Viteri Professor at MIT, whose research group has developed the new solution. The researchers say the system, which can be installed in a home, could alert patients and caregivers to medication errors and potentially reduce unnecessary hospital visits.

The research appears today in the journal Nature Medicine. The study’s lead authors are Mingmin Zhao, a PhD student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), and Kreshnik Hoti, a former visiting scientist at MIT and current faculty member at the University of Prishtina in Kosovo. Other co-authors include Hao Wang, a former CSAIL postdoc and current faculty member at Rutgers University, and Aniruddh Raghu, a CSAIL PhD student.

Some common drugs entail intricate delivery mechanisms. “For example, insulin pens require priming to make sure there are no air bubbles inside. And after injection, you have to hold for 10 seconds,” says Zhao. “All those little steps are necessary to properly deliver the drug to its active site.” Each step also presents opportunity for errors, especially when there’s no pharmacist present to offer corrective tips. Patients might not even realize when they make a mistake — so Zhao’s team designed an automated system that can.

Their system can be broken down into three broad steps. First, a sensor tracks a patient’s movements within a 10-meter radius, using radio waves that reflect off their body. Next, artificial intelligence scours the reflected signals for signs of a patient self-administering an inhaler or insulin pen. Finally, the system alerts the patient or their health care provider when it detects an error in the patient’s self-administration.
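In code, that three-stage flow reduces to a simple loop: ingest windows of reflected-signal data, ask a classifier whether they contain an administration event, and raise an alert when the detected steps deviate from the prescribed routine. Everything below is a stub; the real system works on RF reflections with a trained neural network rather than these placeholder functions.

```python
# Hedged sketch of the sense -> classify -> alert pipeline described above.
# The classifier, step checker, and sensor stream are placeholders, not the
# researchers' models or data formats.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Detection:
    device: str              # e.g. "inhaler" or "insulin_pen"
    steps: List[str]         # ordered step labels recovered from the motion signal

def monitor(sensor_frames, classify: Callable[[list], Optional[Detection]],
            check_steps: Callable[[Detection], List[str]],
            alert: Callable[[str], None]):
    for frames in sensor_frames:                 # step 1: windows of reflected-signal data
        detection = classify(frames)             # step 2: look for an administration event
        if detection is None:
            continue
        for problem in check_steps(detection):   # step 3: flag technique errors
            alert(f"{detection.device}: {problem}")

if __name__ == "__main__":
    fake_stream = [["window-1"], ["window-2"]]
    fake_classify = lambda f: Detection("insulin_pen", ["prime", "inject"]) if f == ["window-2"] else None
    fake_check = lambda d: ["missing 10-second hold after injection"] if "hold" not in d.steps else []
    monitor(fake_stream, fake_classify, fake_check, alert=print)
```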

Wireless sensing technology could help improve patients’ technique with inhalers and insulin pens. | Image: Christine Daniloff, MIT

The researchers adapted their sensing method from a wireless technology they’d previously used to monitor people’s sleeping positions. It starts with a wall-mounted device that emits very low-power radio waves. When someone moves, they modulate the signal and reflect it back to the device’s sensor. Each unique movement yields a corresponding pattern of modulated radio waves that the device can decode. “One nice thing about this system is that it doesn’t require the patient to wear any sensors,” says Zhao. “It can even work through occlusions, similar to how you can access your Wi-Fi when you’re in a different room from your router.”

The new sensor sits in the background at home, like a Wi-Fi router, and uses artificial intelligence to interpret the modulated radio waves. The team developed a neural network to key in on patterns indicating the use of an inhaler or insulin pen. They trained the network to learn those patterns by performing example movements, some relevant (e.g. using an inhaler) and some not (e.g. eating). Through repetition and reinforcement, the network successfully detected 96 percent of insulin pen administrations and 99 percent of inhaler uses.

Once it mastered the art of detection, the network also proved useful for correction. Every proper medicine administration follows a similar sequence — picking up the insulin pen, priming it, injecting, etc. So, the system can flag anomalies in any particular step. For example, the network can recognize if a patient holds down their insulin pen for five seconds instead of the prescribed 10 seconds. The system can then relay that information to the patient or directly to their doctor, so they can fix their technique.
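The per-step check can be as simple as comparing observed durations against prescribed minimums, as in the sketch below. The step names and the 10-second hold come from the example in the article; the other thresholds are hypothetical.

```python
# Hypothetical per-step technique check: compare detected step durations against
# prescribed minimums and report shortfalls or skipped steps. Thresholds other
# than the 10-second hold are invented for this sketch.
PRESCRIBED_MIN_SECONDS = {"prime": 1.0, "inject": 1.0, "hold": 10.0}

def step_anomalies(observed_steps):
    """observed_steps: list of (step_name, duration_seconds) in detected order."""
    problems = []
    for name, duration in observed_steps:
        required = PRESCRIBED_MIN_SECONDS.get(name)
        if required is not None and duration < required:
            problems.append(f"{name} lasted {duration:.0f} s, expected at least {required:.0f} s")
    missing = set(PRESCRIBED_MIN_SECONDS) - {name for name, _ in observed_steps}
    problems.extend(f"step skipped: {name}" for name in sorted(missing))
    return problems

if __name__ == "__main__":
    # e.g. a patient who held the pen for only 5 of the prescribed 10 seconds
    print(step_anomalies([("prime", 2.0), ("inject", 1.5), ("hold", 5.0)]))
```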

“By breaking it down into these steps, we can not only see how frequently the patient is using their device, but also assess their administration technique to see how well they’re doing,” says Zhao.

The researchers say a key feature of their radio wave-based system is its noninvasiveness. “An alternative way to solve this problem is by installing cameras,” says Zhao. “But using a wireless signal is much less intrusive. It doesn’t show people’s appearance.”

He adds that their framework could be adapted to medications beyond inhalers and insulin pens — all it would take is retraining the neural network to recognize the appropriate sequence of movements. Zhao says that “with this type of sensing technology at home, we could detect issues early on, so the person can see a doctor before the problem is exacerbated.”

Fostering ethical thinking in computing

A case studies series from the Social and Ethical Responsibilities of Computing program at MIT delves into a range of social and ethical implications of computing technologies, from the racial disparities that can arise from deploying facial recognition technology in unregulated, real-world settings to the biases of risk prediction algorithms in the criminal justice system and the politicization of data collection.

By Terri Park | MIT Schwarzman College of Computing

Traditional computer scientists and engineers are trained to develop solutions for specific needs, but aren’t always trained to consider their broader implications. Each new technology generation, and particularly the rise of artificial intelligence, leads to new kinds of systems, new ways of creating tools, and new forms of data, for which norms, rules, and laws frequently have yet to catch up. The kinds of impact that such innovations have in the world have often not been apparent until many years later.

As part of the efforts in Social and Ethical Responsibilities of Computing (SERC) within the MIT Stephen A. Schwarzman College of Computing, a new case studies series examines social, ethical, and policy challenges of present-day efforts in computing with the aim of facilitating the development of responsible “habits of mind and action” for those who create and deploy computing technologies.

“Advances in computing have undeniably changed much of how we live and work. Understanding and incorporating broader social context is becoming ever more critical,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing. “This case study series is designed to be a basis for discussions in the classroom and beyond, regarding social, ethical, economic, and other implications so that students and researchers can pursue the development of technology across domains in a holistic manner that addresses these important issues.”

A modular system

By design, the case studies are brief and modular to allow users to mix and match the content to fit a variety of pedagogical needs. Series editors David Kaiser and Julie Shah, who are the associate deans for SERC, structured the cases primarily to be appropriate for undergraduate instruction across a range of classes and fields of study.

“Our goal was to provide a seamless way for instructors to integrate cases into an existing course or cluster several cases together to support a broader module within a course. They might also use the cases as a starting point to design new courses that focus squarely on themes of social and ethical responsibilities of computing,” says Kaiser, the Germeshausen Professor of the History of Science and professor of physics.

Shah, an associate professor of aeronautics and astronautics and a roboticist who designs systems in which humans and machines operate side by side, expects that the cases will also be of interest to those outside of academia, including computing professionals, policy specialists, and general readers. In curating the series, Shah says that “we interpret ‘social and ethical responsibilities of computing’ broadly to focus on perspectives of people who are affected by various technologies, as well as focus on perspectives of designers and engineers.”

The cases are not limited to a particular format and can take shape in various forms — from a magazine-like feature article or Socratic dialogues to choose-your-own-adventure stories or role-playing games grounded in empirical research. Each case study is brief, but includes accompanying notes and references to facilitate more in-depth exploration of a given topic. Multimedia projects will also be considered. “The main goal is to present important material — based on original research — in engaging ways to broad audiences of non-specialists,” says Kaiser.

The SERC case studies are specially commissioned and written by scholars who conduct research centrally on the subject of the piece. Kaiser and Shah approached researchers from within MIT as well as from other academic institutions to bring in a mix of diverse voices on a spectrum of topics. Some cases focus on a particular technology or on trends across platforms, while others assess social, historical, philosophical, legal, and cultural facets that are relevant for thinking critically about current efforts in computing and data sciences.

The cases published in the inaugural issue place readers in various settings that challenge them to consider the social and ethical implications of computing technologies, such as how social media services and surveillance tools are built; the racial disparities that can arise from deploying facial recognition technology in unregulated, real-world settings; the biases of risk prediction algorithms in the criminal justice system; and the politicization of data collection.

“Most of us agree that we want computing to work for social good, but which good? Whose good? Whose needs and values and worldviews are prioritized and whose are overlooked?” says Catherine D’Ignazio, an assistant professor of urban science and planning and director of the Data + Feminism Lab at MIT.

D’Ignazio’s case for the series, co-authored with Lauren Klein, an associate professor in the English and Quantitative Theory and Methods departments at Emory University, introduces readers to the idea that while data are useful, they are not always neutral. “These case studies help us understand the unequal histories that shape our technological systems as well as study their disparate outcomes and effects. They are an exciting step towards holistic, sociotechnical thinking and making.”

Rigorously reviewed

Kaiser and Shah formed an editorial board composed of 55 faculty members and senior researchers associated with 19 departments, labs, and centers at MIT, and instituted a rigorous peer-review policy model commonly adopted by specialized journals. Members of the editorial board will also help commission topics for new cases and help identify authors for a given topic.

For each submission, the series editors collect four to six peer reviews, with reviewers mostly drawn from the editorial board. For each case, half the reviewers come from fields in computing and data sciences and half from fields in the humanities, arts, and social sciences, to ensure balance of topics and presentation within a given case study and across the series.

“Over the past two decades I’ve become a bit jaded when it comes to the academic review process, and so I was particularly heartened to see such care and thought put into all of the reviews,” says Hany Farid, a professor at the University of California at Berkeley with a joint appointment in the Department of Electrical Engineering and Computer Sciences and the School of Information. “The constructive review process made our case study significantly stronger.”

Farid’s case, “The Dangers of Risk Prediction in the Criminal Justice System,” which he penned with Julia Dressel, recently a student of computer science at Dartmouth College, is one of the four commissioned pieces featured in the inaugural issue.

Cases are additionally reviewed by undergraduate volunteers, who help the series editors gauge each submission for balance, accessibility for students in multiple fields of study, and possibilities for adoption in specific courses. The students also work with them to create original homework problems and active learning projects to accompany each case study, to further facilitate adoption of the original materials across a range of existing undergraduate subjects.

“I volunteered to work with this group because I believe that it’s incredibly important for those working in computer science to include thinking about ethics not as an afterthought, but integrated into every step and decision that is made,” says Annie Snyder, a mathematical economics sophomore and a member of the MIT Schwarzman College of Computing’s Undergraduate Advisory Group. “While this is a massive issue to take on, this project is an amazing opportunity to start building an ethical culture amongst the incredibly talented students at MIT who will hopefully carry it forward into their own projects and workplace.”

New sets of case studies, produced with support from the MIT Press’ Open Publishing Services program, will be published twice a year via the Knowledge Futures Group’s PubPub platform. The SERC case studies are made available for free on an open-access basis, under Creative Commons licensing terms. Authors retain copyright, enabling them to reuse and republish their work in more specialized scholarly publications.

“It was important to us to approach this project in an inclusive way and lower the barrier for people to be able to access this content. These are complex issues that we need to deal with, and we hope that by making the cases widely available, more people will engage in social and ethical considerations as they’re studying and developing computing technologies,” says Shah.

Researchers introduce a new generation of tiny, agile drones

Insects’ remarkable acrobatic traits help them navigate the aerial world, with all of its wind gusts, obstacles, and general uncertainty. Image: courtesy of Kevin Yufeng Chen

By Daniel Ackerman

If you’ve ever swatted a mosquito away from your face, only to have it return again (and again and again), you know that insects can be remarkably acrobatic and resilient in flight. Those traits help them navigate the aerial world, with all of its wind gusts, obstacles, and general uncertainty. Such traits are also hard to build into flying robots, but MIT Assistant Professor Kevin Yufeng Chen has built a system that approaches insects’ agility.

Chen, a member of the Department of Electrical Engineering and Computer Science and the Research Laboratory of Electronics, has developed insect-sized drones with unprecedented dexterity and resilience. The aerial robots are powered by a new class of soft actuator, which allows them to withstand the physical travails of real-world flight. Chen hopes the robots could one day aid humans by pollinating crops or performing machinery inspections in cramped spaces.

Chen’s work appears this month in the journal IEEE Transactions on Robotics. His co-authors include MIT PhD student Zhijian Ren, Harvard University PhD student Siyi Xu, and City University of Hong Kong roboticist Pakpong Chirarattananon.

Typically, drones require wide open spaces because they’re neither nimble enough to navigate confined spaces nor robust enough to withstand collisions in a crowd. “If we look at most drones today, they’re usually quite big,” says Chen. “Most of their applications involve flying outdoors. The question is: Can you create insect-scale robots that can move around in very complex, cluttered spaces?”

According to Chen, “The challenge of building small aerial robots is immense.” Pint-sized drones require a fundamentally different construction from larger ones. Large drones are usually powered by motors, but motors lose efficiency as you shrink them. So, Chen says, for insect-like robots “you need to look for alternatives.”

The principal alternative until now has been employing a small, rigid actuator built from piezoelectric ceramic materials. While piezoelectric ceramics allowed the first generation of tiny robots to take flight, they’re quite fragile. And that’s a problem when you’re building a robot to mimic an insect — foraging bumblebees endure a collision about once every second.

Chen designed a more resilient tiny drone using soft actuators instead of hard, fragile ones. The soft actuators are made of thin rubber cylinders coated in carbon nanotubes. When voltage is applied to the carbon nanotubes, they produce an electrostatic force that squeezes and elongates the rubber cylinder. Repeated elongation and contraction causes the drone’s wings to beat — fast.

Chen’s actuators can flap nearly 500 times per second, giving the drone insect-like resilience. “You can hit it when it’s flying, and it can recover,” says Chen. “It can also do aggressive maneuvers like somersaults in the air.” And it weighs in at just 0.6 grams, approximately the mass of a large bumblebee. The drone looks a bit like a tiny cassette tape with wings, though Chen is working on a new prototype shaped like a dragonfly.

“Achieving flight with a centimeter-scale robot is always an impressive feat,” says Farrell Helbling, an assistant professor of electrical and computer engineering at Cornell University, who was not involved in the research. “Because of the soft actuators’ inherent compliance, the robot can safely run into obstacles without greatly inhibiting flight. This feature is well-suited for flight in cluttered, dynamic environments and could be very useful for any number of real-world applications.”

Helbling adds that a key step toward those applications will be untethering the robots from a wired power source, which is currently required by the actuators’ high operating voltage. “I’m excited to see how the authors will reduce operating voltage so that they may one day be able to achieve untethered flight in real-world environments.”

Building insect-like robots can provide a window into the biology and physics of insect flight, a longstanding avenue of inquiry for researchers. Chen’s work addresses these questions through a kind of reverse engineering. “If you want to learn how insects fly, it is very instructive to build a scale robot model,” he says. “You can perturb a few things and see how it affects the kinematics or how the fluid forces change. That will help you understand how those things fly.” But Chen aims to do more than add to entomology textbooks. His drones can also be useful in industry and agriculture.

Chen says his mini-aerialists could navigate complex machinery to ensure safety and functionality. “Think about the inspection of a turbine engine. You’d want a drone to move around [an enclosed space] with a small camera to check for cracks on the turbine plates.”

Other potential applications include artificial pollination of crops or completing search-and-rescue missions following a disaster. “All those things can be very challenging for existing large-scale robots,” says Chen. Sometimes, bigger isn’t better.
