All posts by MIT News

A robot that finds lost items

Researchers at MIT have developed a fully-integrated robotic arm that fuses visual data from a camera and radio frequency (RF) information from an antenna to find and retrieve objects, even when they are buried under a pile and fully out of view. Credits: Courtesy of the researchers

By Adam Zewe | MIT News Office

A busy commuter is ready to walk out the door, only to realize they’ve misplaced their keys and must search through piles of stuff to find them. Rapidly sifting through clutter, they wish they could figure out which pile was hiding the keys.

Researchers at MIT have created a robotic system that can do just that. The system, RFusion, is a robotic arm with a camera and radio frequency (RF) antenna attached to its gripper. It fuses signals from the antenna with visual input from the camera to locate and retrieve an item, even if the item is buried under a pile and completely out of view.

The RFusion prototype the researchers developed relies on RFID tags, which are cheap, battery-less tags that can be stuck to an item and reflect signals sent by an antenna. Because RF signals can travel through most surfaces (like the mound of dirty laundry that may be obscuring the keys), RFusion is able to locate a tagged item within a pile.

Using machine learning, the robotic arm automatically zeroes in on the object’s exact location, moves the items on top of it, grasps the object, and verifies that it picked up the right thing. The camera, antenna, robotic arm, and AI are fully integrated, so RFusion can work in any environment without requiring a special setup.

In this video still, the robotic arm is looking for keys hidden underneath items. Credits: Courtesy of the researchers

While finding lost keys is helpful, RFusion could have many broader applications in the future, like sorting through piles to fulfill orders in a warehouse, identifying and installing components in an auto manufacturing plant, or helping an elderly individual perform daily tasks in the home, though the current prototype isn’t quite fast enough yet for these uses.

“This idea of being able to find items in a chaotic world is an open problem that we’ve been working on for a few years. Having robots that are able to search for things under a pile is a growing need in industry today. Right now, you can think of this as a Roomba on steroids, but in the near term, this could have a lot of applications in manufacturing and warehouse environments,” said senior author Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science and director of the Signal Kinetics group in the MIT Media Lab.

Co-authors include research assistant Tara Boroushaki, the lead author; electrical engineering and computer science graduate student Isaac Perper; research associate Mergen Nachin; and Alberto Rodriguez, the Class of 1957 Associate Professor in the Department of Mechanical Engineering. The research will be presented at the Association for Computing Machinery Conference on Embedded Networked Sensor Systems next month.

Sending signals

RFusion begins searching for an object using its antenna, which bounces signals off the RFID tag (like sunlight being reflected off a mirror) to identify a spherical area in which the tag is located. It combines that sphere with the camera input, which narrows down the object’s location. For instance, the item can’t be located on an area of a table that is empty.
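
As a rough illustration of that fusion step, here is a minimal sketch (not the researchers’ code): it prunes a grid of candidate tag locations by keeping only points consistent with the RF range estimate and lying inside regions the camera sees as covered by clutter. The function names, grid resolution, and tolerance are all assumptions.

```python
# Hypothetical sketch (not RFusion's code): prune candidate tag locations by
# intersecting an RF distance estimate (a spherical shell around the antenna)
# with regions the camera shows to be occupied by clutter.
import numpy as np

def candidate_locations(antenna_pos, rf_range, occupied_mask, grid, tol=0.05):
    """Keep grid cells that lie on the RF sphere AND inside visually occupied space.

    antenna_pos: (3,) antenna position in meters
    rf_range: estimated antenna-to-tag distance from the RF measurement
    occupied_mask: boolean array, True where the camera sees a pile or clutter
    grid: (N, 3) array of 3D cell centers corresponding to occupied_mask.ravel()
    tol: tolerance on the shell thickness, in meters
    """
    dist = np.linalg.norm(grid - antenna_pos, axis=1)
    on_shell = np.abs(dist - rf_range) < tol          # consistent with the RF reading
    visible_clutter = occupied_mask.ravel()           # an empty patch of table is excluded
    return grid[on_shell & visible_clutter]

# Toy usage: a 1 m cube sampled at 5 cm resolution, with clutter in one corner.
xs = np.arange(0, 1, 0.05)
grid = np.array(np.meshgrid(xs, xs, xs)).reshape(3, -1).T
occupied = (grid[:, 0] < 0.4) & (grid[:, 1] < 0.4)    # stand-in for camera segmentation
print(len(candidate_locations(np.zeros(3), 0.35, occupied, grid)))
```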

But once the robot has a general idea of where the item is, it would need to swing its arm widely around the room taking additional measurements to come up with the exact location, which is slow and inefficient.

The researchers used reinforcement learning to train a neural network that can optimize the robot’s trajectory to the object. In reinforcement learning, the algorithm is trained through trial and error with a reward system.

“This is also how our brain learns. We get rewarded from our teachers, from our parents, from a computer game, etc. The same thing happens in reinforcement learning. We let the agent make mistakes or do something right and then we punish or reward the network. This is how the network learns something that is really hard for it to model,” Boroushaki explains.

In the case of RFusion, the optimization algorithm was rewarded when it limited the number of moves it had to make to localize the item and the distance it had to travel to pick it up.
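
The reward itself might look something like the toy function below; the penalty and bonus values, and the function signature, are invented for illustration and are not RFusion’s actual objective.

```python
# Hypothetical reward shaping, consistent with the description above: the agent is
# rewarded for using fewer measurement moves and traveling a shorter distance.
def step_reward(moved_distance, localized, reached_item,
                move_penalty=1.0, distance_penalty=0.5,
                localize_bonus=10.0, grasp_bonus=50.0):
    reward = -move_penalty                       # every extra measurement move costs something
    reward -= distance_penalty * moved_distance  # longer arm trajectories cost more
    if localized:
        reward += localize_bonus                 # pinpointed the tag's location
    if reached_item:
        reward += grasp_bonus                    # arrived at (and can grasp) the item
    return reward

# Example: a short move that finishes localization
print(step_reward(moved_distance=0.2, localized=True, reached_item=False))
```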

Once the system identifies the exact right spot, the neural network uses combined RF and visual information to predict how the robotic arm should grasp the object, including the angle of the hand and the width of the gripper, and whether it must remove other items first. It also scans the item’s tag one last time to make sure it picked up the right object.

Cutting through clutter

The researchers tested RFusion in several different environments. They buried a keychain in a box full of clutter and hid a remote control under a pile of items on a couch.

But feeding all the camera data and RF measurements to the reinforcement learning algorithm would have overwhelmed the system. So, drawing on the method a GPS uses to consolidate data from satellites, they summarized the RF measurements and limited the visual data to the area right in front of the robot.

Their approach worked well — RFusion had a 96 percent success rate when retrieving objects that were fully hidden under a pile.

“We let the agent make mistakes or do something right and then we punish or reward the network. This is how the network learns something that is really hard for it to model,” co-author Tara Boroushaki, pictured here, explains. Credits: Courtesy of the researchers

“Sometimes, if you only rely on RF measurements, there is going to be an outlier, and if you rely only on vision, there is sometimes going to be a mistake from the camera. But if you combine them, they are going to correct each other. That is what made the system so robust,” Boroushaki says.

In the future, the researchers hope to increase the speed of the system so it can move smoothly, rather than stopping periodically to take measurements. This would enable RFusion to be deployed in a fast-paced manufacturing or warehouse setting.

Beyond its potential industrial uses, a system like this could even be incorporated into future smart homes to assist people with any number of household tasks, Boroushaki says.

“Every year, billions of RFID tags are used to identify objects in today’s complex supply chains, including clothing and lots of other consumer goods. The RFusion approach points the way to autonomous robots that can dig through a pile of mixed items and sort them out using the data stored in the RFID tags, much more efficiently than having to inspect each item individually, especially when the items look similar to a computer vision system,” says Matthew S. Reynolds, CoMotion Presidential Innovation Fellow and associate professor of electrical and computer engineering at the University of Washington, who was not involved in the research. “The RFusion approach is a great step forward for robotics operating in complex supply chains where identifying and ‘picking’ the right item quickly and accurately is the key to getting orders fulfilled on time and keeping demanding customers happy.”

The research is sponsored by the National Science Foundation, a Sloan Research Fellowship, NTT DATA, Toppan, Toppan Forms, and the Abdul Latif Jameel Water and Food Systems Lab.

Engineers create a programmable fiber

Image: Anna Gitelson-Kahn. Photo by Roni Cnaani.

By Becky Ham | MIT News correspondent

MIT researchers have created the first fiber with digital capabilities, able to sense, store, analyze, and infer activity after being sewn into a shirt.

Yoel Fink, who is a professor of materials science and electrical engineering, a Research Laboratory of Electronics principal investigator, and the senior author on the study, says digital fibers expand the possibilities for fabrics to uncover the context of hidden patterns in the human body that could be used for physical performance monitoring, medical inference, and early disease detection.

Or, you might someday store your wedding music in the gown you wore on the big day — more on that later.

Fink and his colleagues describe the features of the digital fiber in Nature Communications. Until now, electronic fibers have been analog — carrying a continuous electrical signal — rather than digital, where discrete bits of information can be encoded and processed in 0s and 1s.

“This work presents the first realization of a fabric with the ability to store and process data digitally, adding a new information content dimension to textiles and allowing fabrics to be programmed literally,” Fink says.

MIT PhD student Gabriel Loke and MIT postdoc Tural Khudiyev are the lead authors on the paper. Other co-authors include MIT postdoc Wei Yan; MIT undergraduates Brian Wang, Stephanie Fu, Ioannis Chatziveroglou, Syamantak Payra, Yorai Shaoul, Johnny Fung, and Itamar Chinn; John Joannopoulos, the Francis Wright Davis Chair Professor of Physics and director of the Institute for Soldier Nanotechnologies at MIT; Harrisburg University of Science and Technology master’s student Pin-Wen Chou; and Rhode Island School of Design Associate Professor Anna Gitelson-Kahn. The fabric work was facilitated by Professor Anais Missakian, who holds the Pevaroff-Cohn Family Endowed Chair in Textiles at RISD.

Memory and more

The new fiber was created by placing hundreds of square silicon microscale digital chips into a preform that was then used to create a polymer fiber. By precisely controlling the polymer flow, the researchers were able to create a fiber with continuous electrical connection between the chips over a length of tens of meters.

A close-up photograph shows the fiber threading through a needle. Image: Pin-Wen Chou. Photo by Pin-Wen Chou.

The fiber itself is thin and flexible and can be passed through a needle, sewn into fabrics, and washed at least 10 times without breaking down. According to Loke, “When you put it into a shirt, you can’t feel it at all. You wouldn’t know it was there.”

Making a digital fiber “opens up different areas of opportunities and actually solves some of the problems of functional fibers,” he says.

For instance, it offers a way to control individual elements within a fiber, from one point at the fiber’s end. “You can think of our fiber as a corridor, and the elements are like rooms, and they each have their own unique digital room numbers,” Loke explains. The research team devised a digital addressing method that allows them to “switch on” the functionality of one element without turning on all the elements.
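
A minimal sketch of that addressing idea, with the protocol, class names, and the eight-element fiber all assumed rather than taken from the paper:

```python
# Illustrative sketch (assumed, not the paper's protocol): digitally addressing one
# "room" (chip) along the fiber "corridor" so a single element can be switched on
# without activating the rest.
class FiberElement:
    def __init__(self, address):
        self.address = address
        self.active = False

    def on_bus_message(self, target_address, command):
        # Every chip sees every message on the shared line, but only the one whose
        # hard-wired address matches acts on the command.
        if target_address == self.address:
            self.active = (command == "ON")

elements = [FiberElement(addr) for addr in range(8)]   # 8 chips along the fiber

def broadcast(target_address, command):
    for element in elements:
        element.on_bus_message(target_address, command)

broadcast(3, "ON")                    # switch on only element 3
print([e.active for e in elements])   # [False, False, False, True, False, ...]
```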

A digital fiber can also store a lot of information in memory. The researchers were able to write, store, and read information on the fiber, including a 767-kilobit full-color short movie file and a 0.48 megabyte music file. The files can be stored for two months without power.

When they were dreaming up “crazy ideas” for the fiber, Loke says, they thought about applications like a wedding gown that would store digital wedding music within the weave of its fabric, or even writing the story of the fiber’s creation into its components.

Fink notes that the research at MIT was in close collaboration with the textile department at RISD led by Missakian.  Gitelson-Kahn incorporated the digital fibers into a knitted garment sleeve, thus paving the way to creating the first digital garment.

On-body artificial intelligence

The fiber also takes a few steps forward into artificial intelligence by including, within the fiber memory, a neural network of 1,650 connections. After sewing it around the armpit of a shirt, the researchers used the fiber to collect 270 minutes of surface body temperature data from a person wearing the shirt, and analyze how these data corresponded to different physical activities. Trained on these data, the fiber was able to determine with 96 percent accuracy what activity the person wearing it was engaged in.
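
The article does not describe the network’s architecture, but as a purely illustrative aside, a small 50-30-5 multilayer perceptron over windows of temperature samples has exactly 50×30 + 30×5 = 1,650 connections. The sketch below (untrained, with an assumed input size, normalization, and set of activity labels) shows what such a classifier could look like.

```python
# Purely illustrative: a tiny activity classifier over windows of body-temperature
# samples. A 50-30-5 multilayer perceptron has 50*30 + 30*5 = 1,650 connections,
# matching the count quoted above; the real architecture is not given in the article.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(50, 30))   # 1,500 connections
W2 = rng.normal(scale=0.1, size=(30, 5))    # 150 connections
ACTIVITIES = ["sitting", "standing", "walking", "biking", "other"]  # assumed labels

def classify(temperature_window):
    """temperature_window: 50 consecutive surface-temperature samples (assumed input size)."""
    x = (np.asarray(temperature_window) - 33.0) / 2.0   # crude normalization around skin temp
    hidden = np.maximum(0.0, x @ W1)                    # ReLU layer
    scores = hidden @ W2
    return ACTIVITIES[int(np.argmax(scores))]

print(classify(33.0 + rng.normal(scale=0.3, size=50)))  # untrained weights, arbitrary output
```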

Adding an AI component to the fiber further increases its possibilities, the researchers say. Fabrics with digital components can collect a lot of information across the body over time, and these “lush data” are perfect for machine learning algorithms, Loke says.

A close-up photograph of the digital fibers on green fabric. Image: Anna Gitelson-Kahn. Photo by Roni Cnaani.

“This type of fabric could give quantity and quality open-source data for extracting out new body patterns that we did not know about before,” he says.

With this analytic power, the fibers someday could sense and alert people in real-time to health changes like a respiratory decline or an irregular heartbeat, or deliver muscle activation or heart rate data to athletes during training.

The fiber is controlled by a small external device, so the next step will be to design a new chip as a microcontroller that can be connected within the fiber itself.

“When we can do that, we can call it a fiber computer,” Loke says.

This research was supported by the U.S. Army Institute of Soldier Nanotechnology, National Science Foundation, the U.S. Army Research Office, the MIT Sea Grant, and the Defense Threat Reduction Agency.

Slender robotic finger senses buried items

MIT researchers developed a “Digger Finger” robot that digs through granular material, like sand and gravel, and senses the shapes of buried objects. The technology could aid in disarming buried bombs or inspecting underground cables. Image courtesy of the researchers.

By Daniel Ackerman

Over the years, robots have gotten quite good at identifying objects — as long as they’re out in the open.

Discerning buried items in granular material like sand is a taller order. To do that, a robot would need fingers that were slender enough to penetrate the sand, mobile enough to wriggle free when sand grains jam, and sensitive enough to feel the detailed shape of the buried object.

MIT researchers have now designed a sharp-tipped robot finger equipped with tactile sensing to meet the challenge of identifying buried objects. In experiments, the aptly named Digger Finger was able to dig through granular media such as sand and rice, and it correctly sensed the shapes of submerged items it encountered. The researchers say the robot might one day perform various subterranean duties, such as finding buried cables or disarming buried bombs.

The research will be presented at the next International Symposium on Experimental Robotics. The study’s lead author is Radhen Patel, a postdoc in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Co-authors include CSAIL PhD student Branden Romero, Harvard University PhD student Nancy Ouyang, and Edward Adelson, the John and Dorothy Wilson Professor of Vision Science in CSAIL and the Department of Brain and Cognitive Sciences.

Seeking to identify objects buried in granular material — sand, gravel, and other types of loosely packed particles — isn’t a brand-new quest. Previously, researchers have used technologies that sense the subterranean from above, such as ground-penetrating radar or ultrasonic vibrations. But these techniques provide only a hazy view of submerged objects. They might struggle to differentiate rock from bone, for example.

“So, the idea is to make a finger that has a good sense of touch and can distinguish between the various things it’s feeling,” says Adelson. “That would be helpful if you’re trying to find and disable buried bombs, for example.” Making that idea a reality meant clearing a number of hurdles.

The team’s first challenge was a matter of form: The robotic finger had to be slender and sharp-tipped.

In prior work, the researchers had used a tactile sensor called GelSight. The sensor consisted of a clear gel covered with a reflective membrane that deformed when objects pressed against it. Behind the membrane were three colors of LED lights and a camera. The lights shone through the gel and onto the membrane, while the camera collected the membrane’s pattern of reflection. Computer vision algorithms then extracted the 3D shape of the contact area where the soft finger touched the object. The contraption provided an excellent sense of artificial touch, but it was inconveniently bulky.
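
In very simplified terms, the reconstruction can be thought of as mapping each pixel’s color under the known lights to a local surface slope, then integrating those slopes into a height map. The sketch below illustrates only that general idea; the linear calibration, naive integration, and all names are assumptions, not GelSight’s actual pipeline.

```python
# Much-simplified sketch of the GelSight idea (assumptions throughout): with known
# colored lights, per-pixel RGB maps to surface gradients of the deformed membrane,
# and integrating the gradients gives a rough height map of the contact area.
import numpy as np

def rgb_to_gradients(image, calibration):
    """image: (H, W, 3) RGB of the membrane; calibration: (3, 2) linear map assumed
    to have been fit beforehand from images of objects with known shape."""
    flat = image.reshape(-1, 3)
    grads = flat @ calibration            # per-pixel (dz/dx, dz/dy), a linear stand-in
    return grads.reshape(image.shape[0], image.shape[1], 2)

def integrate_heightmap(grads):
    # Naive integration: cumulative sums along x and y, averaged. Real systems use
    # more robust (e.g., Fourier-domain) integration.
    gx, gy = grads[..., 0], grads[..., 1]
    hx = np.cumsum(gx, axis=1)
    hy = np.cumsum(gy, axis=0)
    return 0.5 * (hx + hy)

image = np.random.rand(64, 64, 3)                 # stand-in for a sensor frame
calibration = np.random.randn(3, 2) * 0.01        # stand-in for a fitted calibration
depth = integrate_heightmap(rgb_to_gradients(image, calibration))
print(depth.shape)
```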

A closeup photograph of the new robot and a diagram of its parts. Image courtesy of the researchers.

For the Digger Finger, the researchers slimmed down their GelSight sensor in two main ways. First, they changed the shape to be a slender cylinder with a beveled tip. Next, they ditched two-thirds of the LED lights, using a combination of blue LEDs and colored fluorescent paint. “That saved a lot of complexity and space,” says Ouyang. “That’s how we were able to get it into such a compact form.” The final product featured a device whose tactile sensing membrane was about 2 square centimeters, similar to the tip of a finger.

With size sorted out, the researchers turned their attention to motion, mounting the finger on a robot arm and digging through fine-grained sand and coarse-grained rice. Granular media have a tendency to jam when numerous particles become locked in place. That makes it difficult to penetrate. So, the team added vibration to the Digger Finger’s capabilities and put it through a battery of tests.

“We wanted to see how mechanical vibrations aid in digging deeper and getting through jams,” says Patel. “We ran the vibrating motor at different operating voltages, which changes the amplitude and frequency of the vibrations.” They found that rapid vibrations helped “fluidize” the media, clearing jams and allowing for deeper burrowing — though this fluidizing effect was harder to achieve in sand than in rice.

Top row: 3D-printed objects used for the object identification experiment. Middle row: Example image data when the Digger Finger directly touches a 3D-printed object. Bottom row: Example image data when the Digger Finger touches a 3D-printed object that is buried in sand. Image courtesy of the researchers.

They also tested various twisting motions in both the rice and sand. Sometimes, grains of each type of media would get stuck between the Digger Finger’s tactile membrane and the buried object it was trying to sense. When this happened with rice, the trapped grains were large enough to completely obscure the shape of the object, though the occlusion could usually be cleared with a little robotic wiggling. Trapped sand was harder to clear, though the grains’ small size meant the Digger Finger could still sense the general contours of the target object.

Patel says that operators will have to adjust the Digger Finger’s motion pattern for different settings “depending on the type of media and on the size and shape of the grains.” The team plans to keep exploring new motions to optimize the Digger Finger’s ability to navigate various media.

Adelson says the Digger Finger is part of a program extending the domains in which robotic touch can be used. Humans use their fingers amidst complex environments, whether fishing for a key in a pants pocket or feeling for a tumor during surgery. “As we get better at artificial touch, we want to be able to use it in situations when you’re surrounded by all kinds of distracting information,” says Adelson. “We want to be able to distinguish between the stuff that’s important and the stuff that’s not.”

Funding for this research was provided, in part, by the Toyota Research Institute through the Toyota-CSAIL Joint Research Center; the Office of Naval Research; and the Norwegian Research Council.


Q&A: Vivienne Sze on crossing the hardware-software divide for efficient artificial intelligence

Associate professor Vivienne Sze is bringing artificial intelligence applications to smartphones and tiny robots by co-designing energy-efficient hardware and software. Image credits: Lillie Paquette, MIT School of Engineering

Not so long ago, watching a movie on a smartphone seemed impossible. Vivienne Sze was a graduate student at MIT at the time, in the mid-2000s, and she was drawn to the challenge of compressing video to keep image quality high without draining the phone’s battery. The solution she hit upon called for co-designing energy-efficient circuits with energy-efficient algorithms.

Sze would go on to be part of the team that won an Engineering Emmy Award for developing the video compression standards still in use today. Now an associate professor in MIT’s Department of Electrical Engineering and Computer Science, Sze has set her sights on a new milestone: bringing artificial intelligence applications to smartphones and tiny robots.

Her research focuses on designing more-efficient deep neural networks to process video, and more-efficient hardware to run those applications. She recently co-published a book on the topic, and will teach a professional education course on how to design efficient deep learning systems in June.

On April 29, Sze will join Assistant Professor Song Han for an MIT Quest AI Roundtable on the co-design of efficient hardware and software moderated by Aude Oliva, director of MIT Quest Corporate and the MIT director of the MIT-IBM Watson AI Lab. Here, Sze discusses her recent work.

Q: Why do we need low-power AI now?

A: AI applications are moving to smartphones, tiny robots, and internet-connected appliances and other devices with limited power and processing capabilities. The challenge is that AI has high computing requirements. Analyzing sensor and camera data from a self-driving car can consume about 2,500 watts, but the computing budget of a smartphone is just about a single watt. Closing this gap requires rethinking the entire stack, a trend that will define the next decade of AI.

Q: What’s the big deal about running AI on a smartphone?

A: It means that the data processing no longer has to take place in the “cloud,” on racks of warehouse servers. Untethering compute from the cloud allows us to broaden AI’s reach. It gives people in developing countries with limited communication infrastructure access to AI. It also speeds up response time by reducing the lag caused by communicating with distant servers. This is crucial for interactive applications like autonomous navigation and augmented reality, which need to respond instantaneously to changing conditions. Processing data on the device can also protect medical and other sensitive records. Data can be processed right where they’re collected.

Q: What makes modern AI so inefficient?

A: The cornerstone of modern AI — deep neural networks — can require hundreds of millions to billions of calculations — orders of magnitude greater than compressing video on a smartphone. But it’s not just number crunching that makes deep networks energy-intensive — it’s the cost of shuffling data to and from memory to perform these computations. The farther the data have to travel, and the more data there are, the greater the bottleneck.
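
As a back-of-the-envelope illustration of that scale (generic layer dimensions, not figures from Sze’s work), counting the multiply-accumulate operations of a single convolutional layer already lands in the hundreds of millions:

```python
# Back-of-the-envelope arithmetic: the multiply-accumulate (MAC) count of one
# convolutional layer, which is why a full network quickly reaches hundreds of
# millions to billions of operations. All dimensions here are generic examples.
def conv_layer_macs(out_h, out_w, out_channels, in_channels, kernel_h, kernel_w):
    # One MAC per (output position, output channel, input channel, kernel tap)
    return out_h * out_w * out_channels * in_channels * kernel_h * kernel_w

# e.g., a 3x3 convolution producing a 56x56x128 output from 128 input channels
macs = conv_layer_macs(56, 56, 128, 128, 3, 3)
print(f"{macs:,} MACs for this single layer")   # roughly 462 million
```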

Q: How are you redesigning AI hardware for greater energy efficiency?

A: We focus on reducing data movement and the amount of data needed for computation. In some deep networks, the same data are used multiple times for different computations. We design specialized hardware to reuse data locally rather than send them off-chip. Storing reused data on-chip makes the process extremely energy-efficient.  

We also optimize the order in which data are processed to maximize their reuse. That’s the key property of the Eyeriss chip that was developed in collaboration with Joel Emer. In our follow-up work, Eyeriss v2, we made the chip flexible enough to reuse data across a wider range of deep networks. The Eyeriss chip also uses compression to reduce data movement, a common tactic among AI chips. The low-power Navion chip that was developed in collaboration with Sertac Karaman for mapping and navigation applications in robotics uses two to three orders of magnitude less energy than a CPU, in part by using optimizations that reduce the amount of data processed and stored on-chip.
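
The principle of on-chip reuse can be illustrated with a simple counter; this conveys only the general idea, not the Eyeriss dataflow itself. Fetching each filter weight once and reusing it from a local buffer avoids re-reading it from off-chip memory for every output position.

```python
# Illustrative counter (not the Eyeriss dataflow): how keeping reused data in a small
# on-chip buffer changes the number of off-chip weight fetches for one layer.
def weight_fetches(num_weights, num_output_positions, reuse_on_chip):
    """Count off-chip reads of filter weights for one convolutional layer."""
    if reuse_on_chip:
        return num_weights                        # fetched once, reused from the local buffer
    return num_weights * num_output_positions     # re-fetched for every output position

# e.g., a 3x3 conv with 128 input and 128 output channels has 147,456 weights
# and 56*56 = 3,136 output positions.
naive = weight_fetches(147_456, 3_136, reuse_on_chip=False)
reuse = weight_fetches(147_456, 3_136, reuse_on_chip=True)
print(f"naive loop order: {naive:,} weight fetches; weights kept on-chip: {reuse:,}")
```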

Q: What changes have you made on the software side to boost efficiency?

A: The more that software aligns with hardware-related performance metrics like energy efficiency, the better we can do. Pruning, for example, is a popular way to remove weights from a deep network to reduce computation costs. But rather than remove weights based on their magnitude, our work on energy-aware pruning suggests you can remove the more energy-intensive weights to improve overall energy consumption. Another method we’ve developed, NetAdapt, automates the process of adapting and optimizing a deep network for a smartphone or other hardware platforms. Our recent follow-up work, NetAdaptv2, accelerates the optimization process to further boost efficiency.
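
A toy contrast between the two pruning criteria, with a made-up per-weight energy model standing in for the hardware estimates that energy-aware pruning actually uses:

```python
# Simplified contrast (the real method is more involved): magnitude pruning removes
# the smallest weights, while energy-aware pruning removes the weights estimated to
# cost the most energy. The per-weight energy costs here are random placeholders.
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(size=100)
energy_cost = rng.uniform(1.0, 5.0, size=100)   # stand-in for a hardware energy model

def magnitude_prune(weights, fraction=0.3):
    k = int(fraction * len(weights))
    drop = np.argsort(np.abs(weights))[:k]      # smallest-magnitude weights go first
    pruned = weights.copy()
    pruned[drop] = 0.0
    return pruned

def energy_aware_prune(weights, energy_cost, fraction=0.3):
    k = int(fraction * len(weights))
    drop = np.argsort(-energy_cost)[:k]         # most energy-hungry weights go first
    pruned = weights.copy()
    pruned[drop] = 0.0
    return pruned

print("energy saved (magnitude):   ", energy_cost[magnitude_prune(weights) == 0].sum())
print("energy saved (energy-aware):", energy_cost[energy_aware_prune(weights, energy_cost) == 0].sum())
```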

Q: What low-power AI applications are you working on?

A: I’m exploring autonomous navigation for low-energy robots with Sertac Karaman. I’m also working with Thomas Heldt to develop a low-cost and potentially more effective way of diagnosing and monitoring people with neurodegenerative disorders like Alzheimer’s and Parkinson’s by tracking their eye movements. Eye-movement properties like reaction time could potentially serve as biomarkers for brain function. In the past, eye-movement tracking took place in clinics because of the expensive equipment required. We’ve shown that an ordinary smartphone camera can take measurements from a patient’s home, making data collection easier and less costly. This could help to monitor disease progression and track improvements in clinical drug trials.

Q: Where is low-power AI headed next?

A: Reducing AI’s energy requirements will extend AI to a wider range of embedded devices, extending its reach into tiny robots, smart homes, and medical devices. A key challenge is that efficiency often requires a tradeoff in performance. For wide adoption, it will be important to dig deeper into these different applications to establish the right balance between efficiency and accuracy.

Navigating beneath the Arctic ice

For scientists to understand the role the changing environment in the Arctic Ocean plays in global climate change, there is a need to map the ocean below the ice cover. Image credits: Troy Barnhart, Chief Petty Officer, U.S. Navy

By Mary Beth Gallagher | Department of Mechanical Engineering

There is a lot of activity beneath the vast, lonely expanses of ice and snow in the Arctic. Climate change has dramatically altered the layer of ice that covers much of the Arctic Ocean. Areas of water that used to be covered by a solid ice pack are now covered by thin layers only 3 feet thick. Beneath the ice, a warm layer of water, part of the Beaufort Lens, has changed the makeup of the aquatic environment.

For scientists to understand the role this changing environment in the Arctic Ocean plays in global climate change, there is a need for mapping the ocean below the ice cover.

A team of MIT engineers and naval officers led by Henrik Schmidt, professor of mechanical and ocean engineering, is trying to understand environmental changes, their impact on acoustic transmission beneath the surface, and how these changes affect navigation and communication for vehicles traveling below the ice.

“Basically, what we want to understand is how does this new Arctic environment brought about by global climate change affect the use of underwater sound for communication, navigation, and sensing?” explains Schmidt.

To answer this question, Schmidt traveled to the Arctic with members of the Laboratory for Autonomous Marine Sensing Systems (LAMSS) including Daniel Goodwin and Bradli Howard, graduate students in the MIT-Woods Hole Oceanographic Institution Joint Program in oceanographic engineering.

With funding from the Office of Naval Research, the team participated in ICEX — or Ice Exercise — 2020, a three-week program hosted by the U.S. Navy, where military personnel, scientists, and engineers work side-by-side executing a variety of research projects and missions.

A strategic waterway

The rapidly changing environment in the Arctic has wide-ranging impacts. In addition to giving researchers more information about the impact of global warming and the effects it has on marine mammals, the thinning ice could potentially open up new shipping lanes and trade routes in areas that were previously untraversable.

Perhaps most crucially for the U.S. Navy, understanding the altered environment also has geopolitical importance.

“If the Arctic environment is changing and we don’t understand it, that could have implications in terms of national security,” says Goodwin.

Several years ago, Schmidt and his colleague Arthur Baggeroer, professor of mechanical and ocean engineering, were among the first to recognize that the warmer waters, part of the Beaufort Lens, coupled with the changing ice composition, impacted how sound traveled in the water.

To successfully navigate throughout the Arctic, the U.S. Navy and other entities in the region need to understand how these changes in sound propagation affect a vehicle’s ability to communicate and navigate through the water.

Using an unpiloted, autonomous underwater vehicle (AUV) built by General Dynamics-Mission Systems (GD-MS), and a system of sensors rigged on buoys developed by the Woods Hole Oceanographic Institution, Schmidt and his team, joined by Dan McDonald and Josiah DeLange of GD-MS, set out to demonstrate a new integrated acoustic communication and navigation concept.

The research team prepares to deploy an autonomous underwater vehicle built by General Dynamics Mission Systems to test their navigational concept. Image credits: Daniel Goodwin, LCDR, USN

The framework, which was also supported and developed by LAMSS members Supun Randeni, EeShan Bhatt, Rui Chen, and Oscar Viquez, as well as LAMSS alumnus Toby Schneider of GobySoft LLC, would allow vehicles to travel through the water with GPS-level accuracy while employing oceanographic sensors for data collection.

“In order to prove that you can use this navigational concept in the Arctic, we have to first ensure we fully understand the environment that we’re operating in,” adds Goodwin.

Understanding the environment below

After arriving at the Arctic Submarine Lab’s ice camp last spring, the research team deployed a number of conductivity-temperature-depth probes to gather data about the aquatic environment in the Arctic.

“By using temperature and salinity as a function of depth, we calculate the sound speed profile. This helps us understand if the AUV’s location is good for communication or bad,” says Howard, who was responsible for monitoring environmental changes to the water column throughout ICEX.
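
One standard way to turn such conductivity-temperature-depth casts into a sound speed profile is an empirical formula such as Medwin’s approximation. The sketch below uses that formula with invented cast values; it is not the team’s actual processing code.

```python
# Minimal sketch of turning CTD data into a sound speed profile, using Medwin's
# well-known empirical approximation (the team's actual processing is not described).
def sound_speed(T, S, z):
    """Approximate speed of sound in seawater (m/s).
    T: temperature in deg C, S: salinity in psu, z: depth in meters."""
    return (1449.2 + 4.6 * T - 0.055 * T**2 + 0.00029 * T**3
            + (1.34 - 0.010 * T) * (S - 35.0) + 0.016 * z)

# Toy CTD casts: (depth m, temperature C, salinity psu) -- illustrative values only
casts = [(10, -1.6, 30.5), (50, -1.4, 31.0), (100, 0.5, 32.5), (200, 0.8, 34.0)]
for z, T, S in casts:
    print(f"{z:4d} m: {sound_speed(T, S, z):7.1f} m/s")
```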

A team including professor Henrik Schmidt, MIT-WHOI Joint Program graduate students Daniel Goodwin and Bradli Howard, members of the Laboratory for Autonomous Marine Sensing Systems, and the Arctic Submarine Lab traveled to the Arctic in March 2020 as part of ICEX 2020, a three-week program hosted by the U.S. Navy, where military personnel, scientists, and engineers work side-by-side executing a variety of research projects and missions. Image credits: Mike Demello, Arctic Submarine Laboratory

Because of the way sound bends in water, a behavior described by Snell’s law, sine-like pressure waves collect in some parts of the water column and disperse in others. Understanding the propagation trajectories is key to predicting good and bad locations for the AUV to operate.

To map the areas of the water with optimal acoustic properties, Howard modified the traditional signal-to-noise ratio (SNR) with a metric known as the multi-path penalty (MPP), which penalizes areas where the AUV receives echoes of the messages. As a result, the vehicle prioritizes operations in areas with less reverberation.
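
Conceptually, the scoring might resemble the toy ranking below; the penalty weight, candidate depths, and numbers are invented, since the article does not give the exact MPP definition.

```python
# Hedged sketch of the idea described above: rank candidate operating depths by SNR
# minus a penalty that grows with the number of multipath echoes heard at that depth.
def multipath_penalized_score(snr_db, echo_count, penalty_per_echo=3.0):
    return snr_db - penalty_per_echo * echo_count

# Candidate depths with (SNR in dB, echoes detected) -- illustrative numbers only
candidates = {20: (18.0, 4), 60: (15.0, 1), 120: (16.0, 0)}
best_depth = max(candidates, key=lambda d: multipath_penalized_score(*candidates[d]))
print(f"Operate near {best_depth} m depth")   # the quieter, low-echo depth wins
```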

These data allowed the team to identify exactly where the vehicle should be positioned in the water column for optimal communications, which in turn results in accurate navigation.

While Howard gathered data on how the characteristics of the water impact acoustics, Goodwin focused on how sound is projected and reflected off the ever-changing ice on the surface.

To get these data, the AUV was outfitted with an acoustic device that measured the motion of the vehicle relative to the ice above. The sound it emitted was picked up by several receivers attached to moorings hanging from the ice.

The data from the vehicle and the receivers were then used by the researchers to compute exactly where the vehicle was at a given time. This location information, together with the data Howard gathered on the acoustic environment in the water, offer a new navigational concept for vehicles traveling in the Arctic Sea.

Protecting the Arctic

After a series of setbacks and challenges due to the unforgiving conditions in the Arctic, the team was able to successfully prove their navigational concept worked. Thanks to the team’s efforts, naval operations and future trade vessels may be able to take advantage of the changing conditions in the Arctic to maximize navigational accuracy and improve underwater communications.

After a series of setbacks and challenges due to the unforgiving conditions in the Arctic, the team was able to successfully prove their navigational concept worked. Image credits: Dan McDonald, General Dynamics Mission Systems

“Our work could improve the ability for the U.S. Navy to safely and effectively operate submarines under the ice for extended periods,” Howard says.

Howard acknowledges that in addition to the changes in physical climate, the geopolitical climate continues to change. This only strengthens the need for improved navigation in the Arctic.

“The U.S. Navy’s goal is to preserve peace and protect global trade by ensuring freedom of navigation throughout the world’s oceans,” she adds. “The navigational concept we proved during ICEX will serve to help the Navy in that mission.”

A robot that senses hidden objects

MIT researchers developed a picking robot that combines vision with radio frequency (RF) sensing to find and grasp objects, even if they’re hidden from view. The technology could aid fulfillment in e-commerce warehouses. Credits: Courtesy of the researchers

By Daniel Ackerman | MIT News Office

In recent years, robots have gained artificial vision, touch, and even smell. “Researchers have been giving robots human-like perception,” says MIT Associate Professor Fadel Adib. In a new paper, Adib’s team is pushing the technology a step further. “We’re trying to give robots superhuman perception,” he says.

The researchers have developed a robot that uses radio waves, which can pass through walls, to sense occluded objects. The robot, called RF Grasp, combines this powerful sensing with more traditional computer vision to locate and grasp items that might otherwise be blocked from view. The advance could one day streamline e-commerce fulfillment in warehouses or help a machine pluck a screwdriver from a jumbled toolkit.

The research will be presented in May at the IEEE International Conference on Robotics and Automation. The paper’s lead author is Tara Boroushaki, a research assistant in the Signal Kinetics Group at the MIT Media Lab. Her MIT co-authors include Adib, who is the director of the Signal Kinetics Group; and Alberto Rodriguez, the Class of 1957 Associate Professor in the Department of Mechanical Engineering. Other co-authors include Junshan Leng, a research engineer at Harvard University, and Ian Clester, a PhD student at Georgia Tech.

As e-commerce continues to grow, warehouse work is still usually the domain of humans, not robots, despite sometimes-dangerous working conditions. That’s in part because robots struggle to locate and grasp objects in such a crowded environment. “Perception and picking are two roadblocks in the industry today,” says Rodriguez. Using optical vision alone, robots can’t perceive the presence of an item packed away in a box or hidden behind another object on the shelf — visible light waves, of course, don’t pass through walls.

But radio waves can.

For decades, radio frequency (RF) identification has been used to track everything from library books to pets. RF identification systems have two main components: a reader and a tag. The tag is a tiny computer chip that gets attached to — or, in the case of pets, implanted in — the item to be tracked. The reader then emits an RF signal, which gets modulated by the tag and reflected back to the reader.

The reflected signal provides information about the location and identity of the tagged item. The technology has gained popularity in retail supply chains — Japan aims to use RF tracking for nearly all retail purchases in a matter of years. The researchers realized this profusion of RF could be a boon for robots, giving them another mode of perception.

“RF is such a different sensing modality than vision,” says Rodriguez. “It would be a mistake not to explore what RF can do.”

RF Grasp uses both a camera and an RF reader to find and grab tagged objects, even when they’re fully blocked from the camera’s view. It consists of a robotic arm attached to a grasping hand. The camera sits on the robot’s wrist. The RF reader stands independent of the robot and relays tracking information to the robot’s control algorithm. So, the robot is constantly collecting both RF tracking data and a visual picture of its surroundings. Integrating these two data streams into the robot’s decision making was one of the biggest challenges the researchers faced.

“The robot has to decide, at each point in time, which of these streams is more important to think about,” says Boroushaki. “It’s not just eye-hand coordination, it’s RF-eye-hand coordination. So, the problem gets very complicated.”

The robot initiates the seek-and-pluck process by pinging the target object’s RF tag for a sense of its whereabouts. “It starts by using RF to focus the attention of vision,” says Adib. “Then you use vision to navigate fine maneuvers.” The sequence is akin to hearing a siren from behind, then turning to look and get a clearer picture of the siren’s source.

With its two complementary senses, RF Grasp zeroes in on the target object. As it gets closer and even starts manipulating the item, vision, which provides much finer detail than RF, dominates the robot’s decision making.
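
One simple way to picture that hand-off is a distance-dependent blend of the two position estimates, sketched below; the weighting function and crossover distance are assumptions, not the controller’s actual arbitration.

```python
# Illustrative only: blend RF and vision position estimates, leaning on RF from far
# away and on vision up close, as the prose above describes.
import numpy as np

def fused_estimate(rf_xyz, vision_xyz, distance_to_item, crossover=0.5):
    """crossover: distance (m) at which RF and vision are weighted equally (assumed)."""
    w_vision = 1.0 / (1.0 + distance_to_item / crossover)   # grows toward 1 as we get close
    w_rf = 1.0 - w_vision
    return w_rf * np.asarray(rf_xyz) + w_vision * np.asarray(vision_xyz)

print(fused_estimate([0.50, 0.20, 0.10], [0.46, 0.22, 0.12], distance_to_item=2.0))  # mostly RF
print(fused_estimate([0.50, 0.20, 0.10], [0.46, 0.22, 0.12], distance_to_item=0.1))  # mostly vision
```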

RF Grasp proved its efficiency in a battery of tests. Compared to a similar robot equipped with only a camera, RF Grasp was able to pinpoint and grab its target object with about half as much total movement. Plus, RF Grasp displayed the unique ability to “declutter” its environment — removing packing materials and other obstacles in its way in order to access the target. Rodriguez says this demonstrates RF Grasp’s “unfair advantage” over robots without penetrative RF sensing. “It has this guidance that other systems simply don’t have.”

RF Grasp could one day perform fulfillment in packed e-commerce warehouses. Its RF sensing could even instantly verify an item’s identity without the need to manipulate the item, expose its barcode, then scan it. “RF has the potential to improve some of those limitations in industry, especially in perception and localization,” says Rodriguez.

Adib also envisions potential home applications for the robot, like locating the right Allen wrench to assemble your Ikea chair. “Or you could imagine the robot finding lost items. It’s like a super-Roomba that goes and retrieves my keys, wherever the heck I put them.”

The research is sponsored by the National Science Foundation, NTT DATA, Toppan, Toppan Forms, and the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS).

Researchers’ algorithm designs soft robots that sense

MIT researchers have developed a deep learning neural network to aid the design of soft-bodied robots, such as these iterations of a robotic elephant. Image: courtesy of the researchers

By Daniel Ackerman | MIT News Office

There are some tasks that traditional robots — the rigid and metallic kind — simply aren’t cut out for. Soft-bodied robots, on the other hand, may be able to interact with people more safely or slip into tight spaces with ease. But for robots to reliably complete their programmed duties, they need to know the whereabouts of all their body parts. That’s a tall task for a soft robot that can deform in a virtually infinite number of ways.

MIT researchers have developed an algorithm to help engineers design soft robots that collect more useful information about their surroundings. The deep-learning algorithm suggests an optimized placement of sensors within the robot’s body, allowing it to better interact with its environment and complete assigned tasks. The advance is a step toward the automation of robot design. “The system not only learns a given task, but also how to best design the robot to solve that task,” says Alexander Amini. “Sensor placement is a very difficult problem to solve. So, having this solution is extremely exciting.”

The research will be presented during April’s IEEE International Conference on Soft Robotics and will be published in the journal IEEE Robotics and Automation Letters. Co-lead authors are Amini and Andrew Spielberg, both PhD students in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Other co-authors include MIT PhD student Lillian Chin, and professors Wojciech Matusik and Daniela Rus.

Creating soft robots that complete real-world tasks has been a long-running challenge in robotics. Their rigid counterparts have a built-in advantage: a limited range of motion. Rigid robots’ finite array of joints and limbs usually makes for manageable calculations by the algorithms that control mapping and motion planning. Soft robots are not so tractable.

Soft-bodied robots are flexible and pliant — they generally feel more like a bouncy ball than a bowling ball. “The main problem with soft robots is that they are infinitely dimensional,” says Spielberg. “Any point on a soft-bodied robot can, in theory, deform in any way possible.” That makes it tough to design a soft robot that can map the location of its body parts. Past efforts have used an external camera to chart the robot’s position and feed that information back into the robot’s control program. But the researchers wanted to create a soft robot untethered from external aid.

“You can’t put an infinite number of sensors on the robot itself,” says Spielberg. “So, the question is: How many sensors do you have, and where do you put those sensors in order to get the most bang for your buck?” The team turned to deep learning for an answer.

The researchers developed a novel neural network architecture that both optimizes sensor placement and learns to efficiently complete tasks. First, the researchers divided the robot’s body into regions called “particles.” Each particle’s rate of strain was provided as an input to the neural network. Through a process of trial and error, the network “learns” the most efficient sequence of movements to complete tasks, like gripping objects of different sizes. At the same time, the network keeps track of which particles are used most often, and it culls the lesser-used particles from the set of inputs for the network’s subsequent trials.
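
A stripped-down sketch of that culling loop, with the particle count, importance scores, and keep fraction all invented for illustration rather than taken from the paper:

```python
# Hedged sketch of the culling idea (not the authors' implementation): track how much
# each body "particle" contributes across training episodes and keep only the
# most-used particles as sensor-input candidates for the next round.
import numpy as np

rng = np.random.default_rng(2)
num_particles = 200
usage = np.zeros(num_particles)

def record_episode(importance_scores):
    """importance_scores: per-particle importance from one training episode (assumed)."""
    global usage
    usage += importance_scores

def cull(keep_fraction=0.5):
    """Return indices of the particles to keep as candidate sensor sites."""
    k = int(keep_fraction * num_particles)
    return np.argsort(-usage)[:k]

for _ in range(20):                                   # stand-in for training episodes
    record_episode(rng.dirichlet(np.ones(num_particles)))
print("suggested sensor sites:", cull(keep_fraction=0.05))
```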

By optimizing the most important particles, the network also suggests where sensors should be placed on the robot to ensure efficient performance. For example, in a simulated robot with a grasping hand, the algorithm might suggest that sensors be concentrated in and around the fingers, where precisely controlled interactions with the environment are vital to the robot’s ability to manipulate objects. While that may seem obvious, it turns out the algorithm vastly outperformed humans’ intuition on where to site the sensors.

The researchers pitted their algorithm against a series of expert predictions. For three different soft robot layouts, the team asked roboticists to manually select where sensors should be placed to enable the efficient completion of tasks like grasping various objects. Then they ran simulations comparing the human-sensorized robots to the algorithm-sensorized robots. And the results weren’t close. “Our model vastly outperformed humans for each task, even though I looked at some of the robot bodies and felt very confident on where the sensors should go,” says Amini. “It turns out there are a lot more subtleties in this problem than we initially expected.”

Spielberg says their work could help to automate the process of robot design. In addition to developing algorithms to control a robot’s movements, “we also need to think about how we’re going to sensorize these robots, and how that will interplay with other components of that system,” he says. And better sensor placement could have industrial applications, especially where robots are used for fine tasks like gripping. “That’s something where you need a very robust, well-optimized sense of touch,” says Spielberg. “So, there’s potential for immediate impact.”

“Automating the design of sensorized soft robots is an important step toward rapidly creating intelligent tools that help people with physical tasks,” says Rus. “The sensors are an important aspect of the process, as they enable the soft robot to ‘see’ and understand the world and its relationship with the world.”

This research was funded, in part, by the National Science Foundation and the Fannie and John Hertz Foundation.

System detects errors when medication is self-administered

The new technology pairs wireless sensing with artificial intelligence to determine when a patient is using an insulin pen or inhaler, and it flags potential errors in the patient’s administration method. | Image: courtesy of the researchers

From swallowing pills to injecting insulin, patients frequently administer their own medication. But they don’t always get it right. Improper adherence to doctors’ orders is commonplace, accounting for thousands of deaths and billions of dollars in medical costs annually. MIT researchers have developed a system to reduce those numbers for some types of medications.

The new technology pairs wireless sensing with artificial intelligence to determine when a patient is using an insulin pen or inhaler, and flags potential errors in the patient’s administration method. “Some past work reports that up to 70% of patients do not take their insulin as prescribed, and many patients do not use inhalers properly,” says Dina Katabi, the Andrew and Erna Viteri Professor at MIT, whose research group has developed the new solution. The researchers say the system, which can be installed in a home, could alert patients and caregivers to medication errors and potentially reduce unnecessary hospital visits.

The research appears today in the journal Nature Medicine. The study’s lead authors are Mingmin Zhao, a PhD student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), and Kreshnik Hoti, a former visiting scientist at MIT and current faculty member at the University of Prishtina in Kosovo. Other co-authors include Hao Wang, a former CSAIL postdoc and current faculty member at Rutgers University, and Aniruddh Raghu, a CSAIL PhD student.

Some common drugs entail intricate delivery mechanisms. “For example, insulin pens require priming to make sure there are no air bubbles inside. And after injection, you have to hold for 10 seconds,” says Zhao. “All those little steps are necessary to properly deliver the drug to its active site.” Each step also presents opportunity for errors, especially when there’s no pharmacist present to offer corrective tips. Patients might not even realize when they make a mistake — so Zhao’s team designed an automated system that can.

Their system can be broken down into three broad steps. First, a sensor tracks a patient’s movements within a 10-meter radius, using radio waves that reflect off their body. Next, artificial intelligence scours the reflected signals for signs of a patient self-administering an inhaler or insulin pen. Finally, the system alerts the patient or their health care provider when it detects an error in the patient’s self-administration.

Wireless sensing technology could help improve patients’ technique with inhalers and insulin pens. | Image: Christine Daniloff, MIT

The researchers adapted their sensing method from a wireless technology they’d previously used to monitor people’s sleeping positions. It starts with a wall-mounted device that emits very low-power radio waves. When someone moves, they modulate the signal and reflect it back to the device’s sensor. Each unique movement yields a corresponding pattern of modulated radio waves that the device can decode. “One nice thing about this system is that it doesn’t require the patient to wear any sensors,” says Zhao. “It can even work through occlusions, similar to how you can access your Wi-Fi when you’re in a different room from your router.”

The new sensor sits in the background at home, like a Wi-Fi router, and uses artificial intelligence to interpret the modulated radio waves. The team developed a neural network to key in on patterns indicating the use of an inhaler or insulin pen. They trained the network to learn those patterns by performing example movements, some relevant (e.g. using an inhaler) and some not (e.g. eating). Through repetition and reinforcement, the network successfully detected 96 percent of insulin pen administrations and 99 percent of inhaler uses.

Once it mastered the art of detection, the network also proved useful for correction. Every proper medicine administration follows a similar sequence — picking up the insulin pen, priming it, injecting, etc. So, the system can flag anomalies in any particular step. For example, the network can recognize if a patient holds down their insulin pen for five seconds instead of the prescribed 10 seconds. The system can then relay that information to the patient or directly to their doctor, so they can fix their technique.
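
In code terms, the correction step amounts to checking the detected sequence against the expected protocol. The sketch below is an assumed structure (step names and minimum durations are illustrative, apart from the 10-second hold mentioned above), not the deployed system.

```python
# Minimal sketch: compare the detected self-administration steps against the expected
# protocol and flag anomalies, e.g. a post-injection hold shorter than 10 seconds.
EXPECTED_STEPS = [("prime", 2.0), ("inject", 1.0), ("hold", 10.0)]  # (step, min seconds)

def check_administration(detected_steps):
    """detected_steps: list of (step_name, duration_seconds) from the detector."""
    issues = []
    for (expected, min_duration), (detected, duration) in zip(EXPECTED_STEPS, detected_steps):
        if detected != expected:
            issues.append(f"expected '{expected}' but saw '{detected}'")
        elif duration < min_duration:
            issues.append(f"'{detected}' lasted {duration:.0f}s, below the {min_duration:.0f}s minimum")
    return issues

print(check_administration([("prime", 3.0), ("inject", 1.0), ("hold", 5.0)]))
# -> ["'hold' lasted 5s, below the 10s minimum"]
```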

“By breaking it down into these steps, we can not only see how frequently the patient is using their device, but also assess their administration technique to see how well they’re doing,” says Zhao.

The researchers say a key feature of their radio wave-based system is its noninvasiveness. “An alternative way to solve this problem is by installing cameras,” says Zhao. “But using a wireless signal is much less intrusive. It doesn’t show peoples’ appearance.”

He adds that their framework could be adapted to medications beyond inhalers and insulin pens — all it would take is retraining the neural network to recognize the appropriate sequence of movements. Zhao says that “with this type of sensing technology at home, we could detect issues early on, so the person can see a doctor before the problem is exacerbated.”

Fostering ethical thinking in computing

A case studies series from the Social and Ethical Responsibilities of Computing program at MIT delves into a range of social and ethical implications of computing technologies, from the racial disparities that can arise from deploying facial recognition technology in unregulated, real-world settings to the biases of risk prediction algorithms in the criminal justice system and the politicization of data collection.

By Terri Park | MIT Schwarzman College of Computing

Traditional computer scientists and engineers are trained to develop solutions for specific needs, but aren’t always trained to consider their broader implications. Each new technology generation, and particularly the rise of artificial intelligence, leads to new kinds of systems, new ways of creating tools, and new forms of data, for which norms, rules, and laws frequently have yet to catch up. The kinds of impact that such innovations have in the world have often not been apparent until many years later.

As part of the efforts in Social and Ethical Responsibilities of Computing (SERC) within the MIT Stephen A. Schwarzman College of Computing, a new case studies series examines social, ethical, and policy challenges of present-day efforts in computing with the aim of facilitating the development of responsible “habits of mind and action” for those who create and deploy computing technologies.

“Advances in computing have undeniably changed much of how we live and work. Understanding and incorporating broader social context is becoming ever more critical,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing. “This case study series is designed to be a basis for discussions in the classroom and beyond, regarding social, ethical, economic, and other implications so that students and researchers can pursue the development of technology across domains in a holistic manner that addresses these important issues.”

A modular system

By design, the case studies are brief and modular to allow users to mix and match the content to fit a variety of pedagogical needs. Series editors David Kaiser and Julie Shah, who are the associate deans for SERC, structured the cases primarily to be appropriate for undergraduate instruction across a range of classes and fields of study.

“Our goal was to provide a seamless way for instructors to integrate cases into an existing course or cluster several cases together to support a broader module within a course. They might also use the cases as a starting point to design new courses that focus squarely on themes of social and ethical responsibilities of computing,” says Kaiser, the Germeshausen Professor of the History of Science and professor of physics.

Shah, an associate professor of aeronautics and astronautics and a roboticist who designs systems in which humans and machines operate side by side, expects that the cases will also be of interest to those outside of academia, including computing professionals, policy specialists, and general readers. In curating the series, Shah says that “we interpret ‘social and ethical responsibilities of computing’ broadly to focus on perspectives of people who are affected by various technologies, as well as focus on perspectives of designers and engineers.”

The cases are not limited to a particular format and can take shape in various forms — from a magazine-like feature article or Socratic dialogues to choose-your-own-adventure stories or role-playing games grounded in empirical research. Each case study is brief, but includes accompanying notes and references to facilitate more in-depth exploration of a given topic. Multimedia projects will also be considered. “The main goal is to present important material — based on original research — in engaging ways to broad audiences of non-specialists,” says Kaiser.

The SERC case studies are specially commissioned and written by scholars who conduct research centrally on the subject of the piece. Kaiser and Shah approached researchers from within MIT as well as from other academic institutions to bring in a mix of diverse voices on a spectrum of topics. Some cases focus on a particular technology or on trends across platforms, while others assess social, historical, philosophical, legal, and cultural facets that are relevant for thinking critically about current efforts in computing and data sciences.

The cases published in the inaugural issue place readers in various settings that challenge them to consider the social and ethical implications of computing technologies, such as how social media services and surveillance tools are built; the racial disparities that can arise from deploying facial recognition technology in unregulated, real-world settings; the biases of risk prediction algorithms in the criminal justice system; and the politicization of data collection.

“Most of us agree that we want computing to work for social good, but which good? Whose good? Whose needs and values and worldviews are prioritized and whose are overlooked?” says Catherine D’Ignazio, an assistant professor of urban science and planning and director of the Data + Feminism Lab at MIT.

D’Ignazio’s case for the series, co-authored with Lauren Klein, an associate professor in the English and Quantitative Theory and Methods departments at Emory University, introduces readers to the idea that while data are useful, they are not always neutral. “These case studies help us understand the unequal histories that shape our technological systems as well as study their disparate outcomes and effects. They are an exciting step towards holistic, sociotechnical thinking and making.”

Rigorously reviewed

Kaiser and Shah formed an editorial board composed of 55 faculty members and senior researchers associated with 19 departments, labs, and centers at MIT, and instituted a rigorous peer-review policy model commonly adopted by specialized journals. Members of the editorial board will also help commission topics for new cases and help identify authors for a given topic.

For each submission, the series editors collect four to six peer reviews, with reviewers mostly drawn from the editorial board. For each case, half the reviewers come from fields in computing and data sciences and half from fields in the humanities, arts, and social sciences, to ensure balance of topics and presentation within a given case study and across the series.

“Over the past two decades I’ve become a bit jaded when it comes to the academic review process, and so I was particularly heartened to see such care and thought put into all of the reviews,” says Hany Farid, a professor at the University of California at Berkeley with a joint appointment in the Department of Electrical Engineering and Computer Sciences and the School of Information. “The constructive review process made our case study significantly stronger.”

Farid’s case, “The Dangers of Risk Prediction in the Criminal Justice System,” which he penned with Julia Dressel, who was until recently a computer science student at Dartmouth College, is one of the four commissioned pieces featured in the inaugural issue.

Cases are additionally reviewed by undergraduate volunteers, who help the series editors gauge each submission for balance, accessibility for students in multiple fields of study, and possibilities for adoption in specific courses. The students also work with them to create original homework problems and active learning projects to accompany each case study, to further facilitate adoption of the original materials across a range of existing undergraduate subjects.

“I volunteered to work with this group because I believe that it’s incredibly important for those working in computer science to include thinking about ethics not as an afterthought, but integrated into every step and decision that is made,” says Annie Snyder, a mathematical economics sophomore and a member of the MIT Schwarzman College of Computing’s Undergraduate Advisory Group. “While this is a massive issue to take on, this project is an amazing opportunity to start building an ethical culture amongst the incredibly talented students at MIT who will hopefully carry it forward into their own projects and workplace.”

New sets of case studies, produced with support from the MIT Press’ Open Publishing Services program, will be published twice a year via the Knowledge Futures Group’s PubPub platform. The SERC case studies are made available for free on an open-access basis, under Creative Commons licensing terms. Authors retain copyright, enabling them to reuse and republish their work in more specialized scholarly publications.

“It was important to us to approach this project in an inclusive way and lower the barrier for people to be able to access this content. These are complex issues that we need to deal with, and we hope that by making the cases widely available, more people will engage in social and ethical considerations as they’re studying and developing computing technologies,” says Shah.

Researchers introduce a new generation of tiny, agile drones

Insects’ remarkable acrobatic traits help them navigate the aerial world, with all of its wind gusts, obstacles, and general uncertainty. Image: courtesy of Kevin Yufeng Chen

By Daniel Ackerman

If you’ve ever swatted a mosquito away from your face, only to have it return again (and again and again), you know that insects can be remarkably acrobatic and resilient in flight. Those traits help them navigate the aerial world, with all of its wind gusts, obstacles, and general uncertainty. Such traits are also hard to build into flying robots, but MIT Assistant Professor Kevin Yufeng Chen has built a system that approaches insects’ agility.

Chen, a member of the Department of Electrical Engineering and Computer Science and the Research Laboratory of Electronics, has developed insect-sized drones with unprecedented dexterity and resilience. The aerial robots are powered by a new class of soft actuator, which allows them to withstand the physical travails of real-world flight. Chen hopes the robots could one day aid humans by pollinating crops or performing machinery inspections in cramped spaces.

Chen’s work appears this month in the journal IEEE Transactions on Robotics. His co-authors include MIT PhD student Zhijian Ren, Harvard University PhD student Siyi Xu, and City University of Hong Kong roboticist Pakpong Chirarattananon.

Typically, drones require wide open spaces because they’re neither nimble enough to navigate confined spaces nor robust enough to withstand collisions in a crowd. “If we look at most drones today, they’re usually quite big,” says Chen. “Most of their applications involve flying outdoors. The question is: Can you create insect-scale robots that can move around in very complex, cluttered spaces?”

According to Chen, “The challenge of building small aerial robots is immense.” Pint-sized drones require a fundamentally different construction from larger ones. Large drones are usually powered by motors, but motors lose efficiency as you shrink them. So, Chen says, for insect-like robots “you need to look for alternatives.”

The principal alternative until now has been employing a small, rigid actuator built from piezoelectric ceramic materials. While piezoelectric ceramics allowed the first generation of tiny robots to take flight, they’re quite fragile. And that’s a problem when you’re building a robot to mimic an insect — foraging bumblebees endure a collision about once every second.

Chen designed a more resilient tiny drone using soft actuators instead of hard, fragile ones. The soft actuators are made of thin rubber cylinders coated in carbon nanotubes. When voltage is applied to the carbon nanotubes, they produce an electrostatic force that squeezes and elongates the rubber cylinder. Repeated elongation and contraction causes the drone’s wings to beat — fast.
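The squeezing force Chen describes is, in generic terms, the electrostatic (Maxwell) pressure that develops across a thin elastomer when voltage is applied. As a rough, back-of-the-envelope illustration only (the voltage, layer thickness, and permittivity below are assumed placeholder values, not numbers from Chen's paper), the pressure scales with the square of the electric field:

```python
# Back-of-the-envelope Maxwell stress for a thin dielectric elastomer layer.
# Generic textbook relation, not the model from Chen's paper; voltage,
# thickness, and permittivity are illustrative guesses.
EPS_0 = 8.854e-12        # vacuum permittivity, F/m
eps_r = 3.0              # assumed relative permittivity of the rubber
voltage = 1500.0         # assumed drive voltage, volts
thickness = 20e-6        # assumed elastomer thickness, meters

# Electrostatic pressure squeezing the layer: p = eps_0 * eps_r * (V / t)^2
pressure = EPS_0 * eps_r * (voltage / thickness) ** 2
print(f"Electrostatic pressure: {pressure / 1e3:.0f} kPa")
```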

Chen’s actuators can flap nearly 500 times per second, giving the drone insect-like resilience. “You can hit it when it’s flying, and it can recover,” says Chen. “It can also do aggressive maneuvers like somersaults in the air.” And it weighs in at just 0.6 grams, approximately the mass of a large bumblebee. The drone looks a bit like a tiny cassette tape with wings, though Chen is working on a new prototype shaped like a dragonfly.

“Achieving flight with a centimeter-scale robot is always an impressive feat,” says Farrell Helbling, an assistant professor of electrical and computer engineering at Cornell University, who was not involved in the research. “Because of the soft actuators’ inherent compliance, the robot can safely run into obstacles without greatly inhibiting flight. This feature is well-suited for flight in cluttered, dynamic environments and could be very useful for any number of real-world applications.”

Helbling adds that a key step toward those applications will be untethering the robots from a wired power source, which is currently required by the actuators’ high operating voltage. “I’m excited to see how the authors will reduce operating voltage so that they may one day be able to achieve untethered flight in real-world environments.”

Building insect-like robots can provide a window into the biology and physics of insect flight, a longstanding avenue of inquiry for researchers. Chen’s work addresses these questions through a kind of reverse engineering. “If you want to learn how insects fly, it is very instructive to build a scale robot model,” he says. “You can perturb a few things and see how it affects the kinematics or how the fluid forces change. That will help you understand how those things fly.” But Chen aims to do more than add to entomology textbooks. His drones can also be useful in industry and agriculture.

Chen says his mini-aerialists could navigate complex machinery to ensure safety and functionality. “Think about the inspection of a turbine engine. You’d want a drone to move around [an enclosed space] with a small camera to check for cracks on the turbine plates.”

Other potential applications include artificial pollination of crops or completing search-and-rescue missions following a disaster. “All those things can be very challenging for existing large-scale robots,” says Chen. Sometimes, bigger isn’t better.

Fabricating fully functional drones

A laser-cutter with a custom end-effector
The MIT CSAIL team’s LaserFactory system can manufacture functional, custom-made devices and robots, without human intervention, potentially enabling rapid prototyping of items like wearables, robots, and printed electronics. Credits: Photo courtesy of the researchers.

By Rachel Gordon | MIT CSAIL

From Star Trek’s replicators to Richie Rich’s wishing machine, popular culture has a long history of parading flashy machines that can instantly output any item to a user’s delight. 

While 3D printers have now made it possible to produce a range of objects that include product models, jewelry, and novelty toys, we still lack the ability to fabricate more complex devices that are essentially ready-to-go right out of the printer. 

A group from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) recently developed a new system to print functional, custom-made devices and robots, without human intervention. Their single system uses a three-ingredient recipe that lets users create structural geometry, print traces, and assemble electronic components like sensors and actuators. 

“LaserFactory” has two parts that work in harmony: a software toolkit that allows users to design custom devices, and a hardware platform that fabricates them. 

CSAIL PhD student Martin Nisser says that this type of “one-stop shop” could be beneficial for product developers, makers, researchers, and educators looking to rapidly prototype things like wearables, robots, and printed electronics. 

“Making fabrication inexpensive, fast, and accessible to a layman remains a challenge,” says Nisser, lead author on a paper about LaserFactory that will appear in the ACM Conference on Human Factors in Computing Systems in May. “By leveraging widely available manufacturing platforms like 3D printers and laser cutters, LaserFactory is the first system that integrates these capabilities and automates the full pipeline for making functional devices in one system.” 

Inside LaserFactory

Let’s say a user has aspirations to create their own drone. They’d first design their device by placing components on it from a parts library, and then draw on circuit traces, which are the copper or aluminum lines on a printed circuit board that allow electricity to flow between electronic components. They’d then finalize the drone’s geometry in the 2D editor. In this case, they’d place propellers and batteries on the canvas, wire them up to make electrical connections, and draw the perimeter to define the quadcopter’s shape. 

The user can then preview their design before the software translates their custom blueprint into machine instructions. The commands are embedded into a single fabrication file for LaserFactory to make the device in one go, aided by the standard laser cutter software. On the hardware side, an add-on that prints circuit traces and assembles components is clipped onto the laser cutter. 

Similar to a chef, LaserFactory automatically cuts the geometry, dispenses silver for circuit traces, picks and places components, and finally cures the silver to make the traces conductive, securing the components in place to complete fabrication. 
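To make that sequence concrete, here is a minimal sketch of how a device design might be flattened into a single ordered fabrication job. The data structures and command names are hypothetical stand-ins for illustration; they are not LaserFactory's actual file format or software interface.

```python
# Minimal sketch: flatten a device design into one ordered fabrication job,
# mirroring the steps described above: cut geometry, dispense silver traces,
# place components, then cure. Classes and command names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    position: tuple          # (x, y) in millimeters on the 2D canvas

@dataclass
class Design:
    outline: list            # polygon vertices defining the device perimeter
    traces: list             # each trace is a list of (x, y) waypoints
    components: list = field(default_factory=list)

def to_fabrication_job(design: Design) -> list:
    """Serialize a design into an ordered command list for one machine run."""
    job = [("CUT_OUTLINE", design.outline)]
    job += [("DISPENSE_SILVER", trace) for trace in design.traces]
    job += [("PICK_AND_PLACE", c.name, c.position) for c in design.components]
    job.append(("CURE_TRACES", None))   # heat-cure the silver so it conducts
    return job

# Example: a toy quadcopter outline with two of its components.
quad = Design(
    outline=[(0, 0), (120, 0), (120, 120), (0, 120)],
    traces=[[(10, 10), (60, 10)], [(60, 10), (60, 110)]],
    components=[Component("motor_1", (10, 10)), Component("battery", (60, 60))],
)
for command in to_fabrication_job(quad):
    print(command)
```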

The device is then fully functional, and in the case of the drone, it can immediately take off to begin a task — a feature that could in theory be used for diverse jobs such as delivery or search-and-rescue operations.

LaserFactory cutting a drone
LaserFactory (in background) can fabricate fully functional drones like this quadcopter. Credits: Photo courtesy of the researchers.

As a future avenue, the team hopes to increase the quality and resolution of the circuit traces, which would allow for denser and more complex electronics. 

As well as fine-tuning the current system, the researchers hope to build on this technology by exploring how to create a fuller range of 3D geometries, potentially through integrating traditional 3D printing into the process. 

“Beyond engineering, we’re also thinking about how this kind of one-stop shop for fabrication devices could be optimally integrated into today’s existing supply chains for manufacturing, and what challenges we may need to solve to allow for that to happen,” says Nisser. “In the future, people shouldn’t be expected to have an engineering degree to build robots, any more than they should have a computer science degree to install software.” 

This research is based upon work supported by the National Science Foundation. The work was also supported by a Microsoft Research Faculty Fellowship and The Royal Swedish Academy of Sciences.

Designing customized “brains” for robots

Illustration of robot surrounded by checked/crossed brains
MIT researchers have developed an automated way to design customized hardware, or “brains,” that speeds up a robot’s operation. Image credits: Jose-Luis Olivares, MIT

By Daniel Ackerman

Contemporary robots can move quickly. “The motors are fast, and they’re powerful,” says Sabrina Neuman.

Yet in complex situations, like interactions with people, robots often don’t move quickly. “The hang-up is what’s going on in the robot’s head,” she adds.

Perceiving stimuli and calculating a response takes a “boatload of computation,” which limits reaction time, says Neuman, who recently graduated with a PhD from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Neuman has found a way to fight this mismatch between a robot’s “mind” and body. The method, called robomorphic computing, uses a robot’s physical layout and intended applications to generate a customized computer chip that minimizes the robot’s response time.

The advance could fuel a variety of robotics applications, including, potentially, frontline medical care of contagious patients. “It would be fantastic if we could have robots that could help reduce risk for patients and hospital workers,” says Neuman.

Neuman will present the research at this April’s International Conference on Architectural Support for Programming Languages and Operating Systems. MIT co-authors include graduate student Thomas Bourgeat and Srini Devadas, the Edwin Sibley Webster Professor of Electrical Engineering and Neuman’s PhD advisor. Other co-authors include Brian Plancher, Thierry Tambe, and Vijay Janapa Reddi, all of Harvard University. Neuman is now a postdoctoral NSF Computing Innovation Fellow at Harvard’s School of Engineering and Applied Sciences.

There are three main steps in a robot’s operation, according to Neuman. The first is perception, which includes gathering data using sensors or cameras. The second is mapping and localization: “Based on what they’ve seen, they have to construct a map of the world around them and then localize themselves within that map,” says Neuman. The third step is motion planning and control — in other words, plotting a course of action.
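A toy sketch of that three-stage loop looks something like the following; the function names and placeholder data are illustrative assumptions, not code from Neuman's system.

```python
# A minimal sketch of the three-stage loop Neuman describes: perception,
# mapping/localization, and motion planning. Placeholder logic only.
def perceive(sensors):
    """Gather raw data from cameras and other sensors."""
    return {"image": sensors.get("camera"), "ranges": sensors.get("lidar")}

def localize(observation, world_map):
    """Update the map and estimate the robot's pose within it."""
    world_map.setdefault("landmarks", []).append(observation["ranges"])
    return {"x": 0.0, "y": 0.0, "heading": 0.0}   # placeholder pose estimate

def plan_motion(pose, goal):
    """Plot a course of action from the current pose toward the goal."""
    return [(pose["x"], pose["y"]), goal]          # trivial straight-line plan

# One pass through the loop; a real robot repeats this many times per second.
world_map = {}
sensors = {"camera": "frame_0", "lidar": [1.2, 0.8, 2.5]}
pose = localize(perceive(sensors), world_map)
print(plan_motion(pose, goal=(1.0, 2.0)))
```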

These steps can take time and an awful lot of computing power. “For robots to be deployed into the field and safely operate in dynamic environments around humans, they need to be able to think and react very quickly,” says Plancher. “Current algorithms cannot be run on current CPU hardware fast enough.”

Neuman adds that researchers have been investigating better algorithms, but she thinks software improvements alone aren’t the answer. “What’s relatively new is the idea that you might also explore better hardware.” That means moving beyond a standard-issue CPU processing chip that comprises a robot’s brain — with the help of hardware acceleration.

Hardware acceleration refers to the use of a specialized hardware unit to perform certain computing tasks more efficiently. A commonly used hardware accelerator is the graphics processing unit (GPU), a chip specialized for parallel processing. These devices are handy for graphics because their parallel structure allows them to simultaneously process thousands of pixels. “A GPU is not the best at everything, but it’s the best at what it’s built for,” says Neuman. “You get higher performance for a particular application.” Most robots are designed with an intended set of applications and could therefore benefit from hardware acceleration. That’s why Neuman’s team developed robomorphic computing.

The system creates a customized hardware design to best serve a particular robot’s computing needs. The user inputs the parameters of a robot, like its limb layout and how its various joints can move. Neuman’s system translates these physical properties into mathematical matrices. These matrices are “sparse,” meaning they contain many zero values that roughly correspond to movements that are impossible given a robot’s particular anatomy. (Similarly, your arm’s movements are limited because it can only bend at certain joints — it’s not an infinitely pliable spaghetti noodle.)

The system then designs a hardware architecture specialized to run calculations only on the non-zero values in the matrices. The resulting chip design is therefore tailored to maximize efficiency for the robot’s computing needs. And that customization paid off in testing.
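A small sketch conveys the core idea, with software standing in for the specialized hardware: if the robot's morphology fixes which matrix entries can ever be non-zero, the computation only needs to touch those entries. The mask and matrix below are invented for illustration; they are not produced by the authors' tool.

```python
# Sketch of exploiting a morphology-derived sparsity pattern: compute only
# the matrix entries that can ever be non-zero for this robot.
import numpy as np

# Hypothetical sparsity mask derived from a robot's limb layout:
# True marks entries that can be non-zero for this morphology.
mask = np.array([
    [True,  False, False],
    [True,  True,  False],
    [False, True,  True ],
])

def specialized_matvec(matrix, vector, mask):
    """Multiply using only the entries the morphology allows to be non-zero."""
    result = np.zeros(matrix.shape[0])
    rows, cols = np.nonzero(mask)
    for r, c in zip(rows, cols):
        result[r] += matrix[r, c] * vector[c]   # skip structurally-zero terms
    return result

M = np.where(mask, np.random.rand(3, 3), 0.0)   # matrix respecting the mask
v = np.random.rand(3)
assert np.allclose(specialized_matvec(M, v, mask), M @ v)
print("specialized result matches dense multiply")
```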

Hardware architecture designed using this method for a particular application outperformed off-the-shelf CPU and GPU units. While Neuman’s team didn’t fabricate a specialized chip from scratch, they programmed a customizable field-programmable gate array (FPGA) chip according to their system’s suggestions. Despite operating at a slower clock rate, that chip performed eight times faster than the CPU and 86 times faster than the GPU.

“I was thrilled with those results,” says Neuman. “Even though we were hamstrung by the lower clock speed, we made up for it by just being more efficient.”

Plancher sees widespread potential for robomorphic computing. “Ideally we can eventually fabricate a custom motion-planning chip for every robot, allowing them to quickly compute safe and efficient motions,” he says. “I wouldn’t be surprised if 20 years from now every robot had a handful of custom computer chips powering it, and this could be one of them.” Neuman adds that robomorphic computing might allow robots to relieve humans of risk in a range of settings, such as caring for Covid-19 patients or manipulating heavy objects.

“This work is exciting because it shows how specialized circuit designs can be used to accelerate a core component of robot control,” says Robin Deits, a robotics engineer at Boston Dynamics who was not involved in the research. “Software performance is crucial for robotics because the real world never waits around for the robot to finish thinking.” He adds that Neuman’s advance could enable robots to think faster, “unlocking exciting behaviors that previously would be too computationally difficult.”

Neuman next plans to automate the entire system of robomorphic computing. Users will simply drag and drop their robot’s parameters, and “out the other end comes the hardware description. I think that’s the thing that’ll push it over the edge and make it really useful.”

This research was funded by the National Science Foundation, the Computing Research Association, the CIFellows Project, and the Defense Advanced Research Projects Agency.

Versatile building blocks make structures with surprising mechanical properties

Mechanical metamaterials
CBA researchers have created four different types of novel subunits, called voxels (a 3D variation on the pixels of a 2D image). Left to right: rigid (grey), compliant (purple), auxetic (orange), chiral (blue). Image credits: Benjamin Jenett, CBA

By David L. Chandler

Researchers at MIT’s Center for Bits and Atoms have created tiny building blocks that exhibit a variety of unique mechanical properties, such as the ability to produce a twisting motion when squeezed. These subunits could potentially be assembled by tiny robots into a nearly limitless variety of objects with built-in functionality, including vehicles, large industrial parts, or specialized robots that can be repeatedly reassembled in different forms.

The researchers created four different types of these subunits, called voxels (a 3D variation on the pixels of a 2D image). Each voxel type exhibits special properties not found in typical natural materials, and in combination they can be used to make devices that respond to environmental stimuli in predictable ways. Examples might include airplane wings or turbine blades that respond to changes in air pressure or wind speed by changing their overall shape.

The findings, which detail the creation of a family of discrete “mechanical metamaterials,” are described in a paper published in the journal Science Advances, authored by recent MIT doctoral graduate Benjamin Jenett PhD ’20, Professor Neil Gershenfeld, and four others.

“This remarkable, fundamental, and beautiful synthesis promises to revolutionize the cost, tailorability, and functional efficiency of ultralight, materials-frugal structures,” says Amory Lovins, an adjunct professor of civil and environmental engineering at Stanford University and founder of Rocky Mountain Institute, who was not associated with this work.

Metamaterials get their name because their large-scale properties are different from the microlevel properties of their component materials. They are used in electromagnetics and as “architected” materials, which are designed at the level of their microstructure. “But there hasn’t been much done on creating macroscopic mechanical properties as a metamaterial,” Gershenfeld says.

With this approach, engineers should be able to build structures incorporating a wide range of material properties — and produce them all using the same shared production and assembly processes, Gershenfeld says.

The voxels are assembled from flat frame pieces of injection-molded polymers, then combined into three-dimensional shapes that can be joined into larger functional structures. They are mostly open space and thus provide an extremely lightweight but rigid framework when assembled. Besides the basic rigid unit, which provides an exceptional combination of strength and light weight, there are three other variations of these voxels, each with a different unusual property.

The “auxetic” voxels have a strange property: when a cube of the material is compressed, instead of bulging out at the sides, it actually bulges inward. This is the first demonstration of such a material produced through conventional and inexpensive manufacturing methods.

There are also “compliant” voxels, with a zero Poisson ratio, which is somewhat similar to the auxetic property, but in this case, when the material is compressed, the sides do not change shape at all. Few known materials exhibit this property, which can now be produced through this new approach.

Finally, “chiral” voxels respond to axial compression or stretching with a twisting motion. Again, this is an uncommon property; research that produced one such material through complex fabrication techniques was hailed last year as a significant finding. This work makes this property easily accessible at macroscopic scales.
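The first two behaviors are conveniently summarized by Poisson's ratio, the negative of the transverse strain divided by the axial strain: positive for conventional materials, negative for auxetic ones, and zero for the compliant voxels. The strain values in the short sketch below are made up for illustration, not measurements from the paper.

```python
# Numerical illustration of the Poisson's-ratio distinction among voxel types:
# nu = -(transverse strain) / (axial strain). Strain values are invented.
def poissons_ratio(axial_strain, transverse_strain):
    return -transverse_strain / axial_strain

samples = {
    "conventional": (-0.010, +0.003),  # compress axially, sides bulge outward
    "auxetic":      (-0.010, -0.004),  # sides pull inward -> negative ratio
    "compliant":    (-0.010,  0.000),  # sides unchanged -> zero ratio
}
for name, (axial, transverse) in samples.items():
    print(f"{name:12s} nu = {poissons_ratio(axial, transverse):+.2f}")
```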

“Each type of material property we’re showing has previously been its own field,” Gershenfeld says. “People would write papers on just that one property. This is the first thing that shows all of them in one single system.”

To demonstrate the real-world potential of large objects constructed in a LEGO-like manner out of these mass-produced voxels, the team, working in collaboration with engineers at Toyota, produced a functional super-mileage race car, which they demonstrated on a race track during an international robotics conference earlier this year.

They were able to assemble the lightweight, high-performance structure in just a month, Jenett says, whereas building a comparable structure using conventional fiberglass construction methods had previously taken a year.

During the race, the track was slick from rain, and the race car ended up crashing into a barrier. To the surprise of everyone involved, the car’s lattice-like internal structure deformed and then bounced back, absorbing the shock with little damage. A conventionally built car, Jenett says, would likely have been severely dented if it was made of metal, or shattered if it was composite.

The car provided a vivid demonstration of the fact that these tiny parts can indeed be used to make functional devices at human-sized scales. And, Gershenfeld points out, in the structure of the car, “these aren’t parts connected to something else. The whole thing is made out of nothing but these parts,” except for the motors and power supply.

Because the voxels are uniform in size and composition, they can be combined in any way needed to provide different functions for the resulting device. “We can span a wide range of material properties that before now have been considered very specialized,” Gershenfeld says. “The point is that you don’t have to pick one property. You can make, for example, robots that bend in one direction and are stiff in another direction and move only in certain ways. And so, the big change over our earlier work is this ability to span multiple mechanical material properties, that before now have been considered in isolation.”

Jenett, who carried out much of this work as the basis for his doctoral thesis, says “these parts are low-cost, easily produced, and very fast to assemble, and you get this range of properties all in one system. They’re all compatible with each other, so there’s all these different types of exotic properties, but they all play well with each other in the same scalable, inexpensive system.”

“Think about all the rigid parts and moving parts in cars and robots and boats and planes,” Gershenfeld says. “And we can span all of that with this one system.”

A key factor is that a structure made up of one type of these voxels will behave exactly the same way as the subunit itself, Jenett says. “We were able to demonstrate that the joints effectively disappear when you assemble the parts together. It behaves as a continuum, monolithic material.”

Whereas robotics research has tended to be divided between hard and soft robots, “this is very much neither,” Gershenfeld says, because of its potential to mix and match these properties within a single device.

One possible early application of this technology, Jenett says, could be building the blades of wind turbines. As these structures become ever larger, transporting the blades to their operating site becomes a serious logistical issue, whereas if they are assembled from thousands of tiny subunits, that job can be done at the site, eliminating the transportation issue. Similarly, the disposal of used turbine blades is already becoming a serious problem because of their large size and lack of recyclability. But blades made up of tiny voxels could be disassembled on site, and the voxels then reused to make something else.

In addition, the blades themselves could be more efficient, because they could have a mix of mechanical properties designed into the structure that would allow them to respond dynamically, passively, to changes in wind strength, he says.

Overall, Jenett says, “Now we have this low-cost, scalable system, so we can design whatever we want to. We can do quadrupeds, we can do swimming robots, we can do flying robots. That flexibility is one of the key benefits of the system.”

Stanford’s Lovins says that this technology “could make inexpensive, durable, extraordinarily lightweight aeronautical flight surfaces that passively and continuously optimize their shape like a bird’s wing. It could also make automobiles’ empty mass more nearly approach their payload, as their crashworthy structure becomes mostly air. It may even permit spherical shells whose crush strength allows a vacuum balloon (with no helium) buoyant in the atmosphere to lift a couple of dozen times the net payload of a jumbo jet.”

He adds, “Like biomimicry and integrative design, this new art of cellular metamaterials is a powerful new tool for helping us do more with less.”

The research team included Filippos Tourlomousis, Alfonso Parra Rubio, and Megan Ochalek at MIT, and Christopher Cameron at the U.S. Army Research Laboratory. The work was supported by NASA, the U.S. Army Research Laboratory and the Center for Bits and Atoms Consortia.

Electronic design tool morphs interactive objects

MorphSensor glasses
An MIT team used MorphSensor to design multiple applications, including a pair of glasses that monitor light absorption to protect eye health. Credits: Photo courtesy of the researchers.

By Rachel Gordon

We’ve come a long way since the first 3D-printed item, an eye wash cup, and we can now rapidly fabricate things like car parts, musical instruments, and even biological tissues and organoids.

While many of these objects can be freely designed and quickly made, the addition of electronics to embed things like sensors, chips, and tags usually requires designing the two separately, making it difficult to create items where the added functions are easily integrated with the form. 

Now, a 3D design environment from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) lets users iterate an object’s shape and electronic function in one cohesive space, to add existing sensors to early-stage prototypes.

The team tested the system, called MorphSensor, by modeling an N95 mask with a humidity sensor, a temperature-sensing ring, and glasses that monitor light absorption to protect eye health.

MorphSensor automatically converts electronic designs into 3D models, and then lets users iterate on the geometry and manipulate active sensing parts. This might look like a 2D image of a pair of AirPods and a sensor template, where a person could edit the design until the sensor is embedded, printed, and taped onto the item. 

To test the effectiveness of MorphSensor, the researchers created an evaluation based on standard industrial assembly and testing procedures. The data showed that MorphSensor could match the off-the-shelf sensor modules with small error margins, for both the analog and digital sensors.

“MorphSensor fits into my long-term vision of something called ‘rapid function prototyping’, with the objective to create interactive objects where the functions are directly integrated with the form and fabricated in one go, even for non-expert users,” says CSAIL PhD student Junyi Zhu, lead author on a new paper about the project. “This offers the promise that, when prototyping, the object form could follow its designated function, and the function could adapt to its physical form.” 

MorphSensor in action 

Imagine being able to have your own design lab where, instead of needing to buy new items, you could cost-effectively update your own items using a single system for both design and hardware. 

For example, let’s say you want to update your face mask to monitor surrounding air quality. Using MorphSensor, users would first design or import the 3D face mask model and sensor modules from either MorphSensor’s database or online open-source files. The system would then generate a 3D model with individual electronic components (with airwires connected between them) and color-coding to highlight the active sensing components.  

Designers can then drag and drop the electronic components directly onto the face mask, and rotate them based on design needs. As a final step, users draw physical wires onto the design where they want them to appear, using the system’s guidance to connect the circuit. 

Once satisfied with the design, the “morphed sensor” can be rapidly fabricated using an inkjet printer and conductive tape, so it can be adhered to the object. Users can also outsource the design to a professional fabrication house.  

To test their system, the team iterated on EarPods for sleep tracking, which only took 45 minutes to design and fabricate. They also updated a “weather-aware” ring to provide weather advice, by integrating a temperature sensor with the ring geometry. In addition, they manipulated an N95 mask to monitor its substrate contamination, enabling it to alert its user when the mask needs to be replaced.

In its current form, MorphSensor helps designers maintain connectivity of the circuit at all times, by highlighting which components contribute to the actual sensing. However, the team notes it would be beneficial to expand this set of support tools even further, where future versions could potentially merge electrical logic of multiple sensor modules together to eliminate redundant components and circuits and save space (or preserve the object form). 

Zhu wrote the paper alongside MIT graduate student Yunyi Zhu; undergraduates Jiaming Cui, Leon Cheng, Jackson Snowden, and Mark Chounlakone; postdoc Michael Wessely; and Professor Stefanie Mueller. The team will virtually present their paper at the ACM User Interface Software and Technology Symposium. 

This material is based upon work supported by the National Science Foundation.

Robot takes contact-free measurements of patients’ vital signs

By Anne Trafton

During the current coronavirus pandemic, one of the riskiest parts of a health care worker’s job is assessing people who have symptoms of Covid-19. Researchers from MIT, Boston Dynamics, and Brigham and Women’s Hospital hope to reduce that risk by using robots to remotely measure patients’ vital signs.

The robots, which are controlled by a handheld device, can also carry a tablet that allows doctors to ask patients about their symptoms without being in the same room.

“In robotics, one of our goals is to use automation and robotic technology to remove people from dangerous jobs,” says Henwei Huang, an MIT postdoc. “We thought it should be possible for us to use a robot to remove the health care worker from the risk of directly exposing themselves to the patient.”

Using four cameras mounted on a dog-like robot developed by Boston Dynamics, the researchers have shown that they can measure skin temperature, breathing rate, pulse rate, and blood oxygen saturation in healthy patients, from a distance of 2 meters. They are now making plans to test it in patients with Covid-19 symptoms.

“We are thrilled to have forged this industry-academia partnership in which scientists with engineering and robotics expertise worked with clinical teams at the hospital to bring sophisticated technologies to the bedside,” says Giovanni Traverso, an MIT assistant professor of mechanical engineering, a gastroenterologist at Brigham and Women’s Hospital, and the senior author of the study.

The researchers have posted a paper on their system on the preprint server TechRxiv, and have submitted it to a peer-reviewed journal. Huang is one of the lead authors of the study, along with Peter Chai, an assistant professor of emergency medicine at Brigham and Women’s Hospital, and Claas Ehmke, a visiting scholar from ETH Zurich.

Measuring vital signs

When Covid-19 cases began surging in Boston in March, many hospitals, including Brigham and Women’s, set up triage tents outside their emergency departments to evaluate people with Covid-19 symptoms. One major component of this initial evaluation is measuring vital signs, including body temperature.

The MIT and BWH researchers came up with the idea to use robotics to enable contactless monitoring of vital signs, to allow health care workers to minimize their exposure to potentially infectious patients. They decided to use existing computer vision technologies that can measure temperature, breathing rate, pulse, and blood oxygen saturation, and worked to make them mobile.

To achieve that, they used a robot known as Spot, which can walk on four legs, similarly to a dog. Health care workers can maneuver the robot to wherever patients are sitting, using a handheld controller. The researchers mounted four different cameras onto the robot — an infrared camera plus three monochrome cameras that filter different wavelengths of light.

The researchers developed algorithms that allow them to use the infrared camera to measure both elevated skin temperature and breathing rate. For body temperature, the camera measures skin temperature on the face, and the algorithm correlates that temperature with core body temperature. The algorithm also takes into account the ambient temperature and the distance between the camera and the patient, so that measurements can be taken from different distances, under different weather conditions, and still be accurate.

Measurements from the infrared camera can also be used to calculate the patient’s breathing rate. As the patient breathes in and out, wearing a mask, their breath changes the temperature of the mask. Measuring this temperature change allows the researchers to calculate how rapidly the patient is breathing.
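One generic way to turn that periodic temperature change into a number is to find the dominant frequency of the signal within a plausible breathing band. The sketch below simulates such a signal and recovers the rate; the frame rate and noise level are assumptions, and this is not the authors' published algorithm.

```python
# Breathing-rate estimate from a simulated mask-temperature signal:
# take the dominant frequency within a plausible breathing band.
import numpy as np

fs = 10.0                                   # assumed camera frame rate, Hz
t = np.arange(0, 30, 1 / fs)                # 30 seconds of samples
breaths_per_min = 16                        # simulated ground truth
signal = 0.3 * np.sin(2 * np.pi * (breaths_per_min / 60) * t)
signal += 0.05 * np.random.randn(t.size)    # measurement noise

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
# Restrict to a plausible breathing band (6-40 breaths/min = 0.1-0.67 Hz).
band = (freqs >= 0.1) & (freqs <= 0.67)
estimate_hz = freqs[band][np.argmax(spectrum[band])]
print(f"Estimated breathing rate: {estimate_hz * 60:.1f} breaths/min")
```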

The three monochrome cameras each filter a different wavelength of light — 670, 810, and 880 nanometers. These wavelengths allow the researchers to measure the slight color changes that result when hemoglobin in blood cells binds to oxygen and flows through blood vessels. The researchers’ algorithm uses these measurements to calculate both pulse rate and blood oxygen saturation.
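The underlying idea resembles classic pulse oximetry: the pulse rate comes from the dominant frequency of the intensity fluctuation, and oxygen saturation from the relative strength of that fluctuation at two wavelengths, the so-called ratio of ratios. In the sketch below, the simulated signals, the choice of channels, and the linear calibration constants are all illustrative assumptions rather than the team's algorithm.

```python
# Generic camera-based pulse oximetry sketch: pulse rate from the dominant
# frequency, SpO2 from the ratio of ratios at two wavelengths.
import numpy as np

fs, duration, pulse_hz = 30.0, 20.0, 1.2       # assumed frame rate; ~72 bpm
t = np.arange(0, duration, 1 / fs)

def simulated_ppg(dc, ac):
    """Simulate the intensity signal seen through one wavelength filter."""
    return dc + ac * np.sin(2 * np.pi * pulse_hz * t) + 0.002 * np.random.randn(t.size)

red = simulated_ppg(dc=1.0, ac=0.010)          # e.g. the 670 nm channel
infrared = simulated_ppg(dc=1.0, ac=0.020)     # e.g. the 880 nm channel

# Pulse rate: dominant frequency of the detrended infrared signal.
spectrum = np.abs(np.fft.rfft(infrared - infrared.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
pulse_bpm = 60 * freqs[np.argmax(spectrum)]

# Ratio of ratios, mapped to SpO2 with an invented linear calibration.
def ac_over_dc(signal):
    return np.std(signal) / np.mean(signal)

R = ac_over_dc(red) / ac_over_dc(infrared)
spo2 = 110 - 25 * R
print(f"Pulse: {pulse_bpm:.0f} bpm, SpO2 estimate: {spo2:.0f}%")
```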

“We didn’t really develop new technology to do the measurements,” Huang says. “What we did is integrate them together very specifically for the Covid application, to analyze different vital signs at the same time.”

Continuous monitoring

In this study, the researchers performed the measurements on healthy volunteers, and they are now making plans to test their robotic approach in people who are showing symptoms of Covid-19, in a hospital emergency department.

While in the near term, the researchers plan to focus on triage applications, in the longer term, they envision that the robots could be deployed in patients’ hospital rooms. This would allow the robots to continuously monitor patients and also allow doctors to check on them, via tablet, without having to enter the room. Both applications would require approval from the U.S. Food and Drug Administration.

The research was funded by the MIT Department of Mechanical Engineering and the Karl van Tassel (1925) Career Development Professorship, and Boston Dynamics.
