
A step toward safe and reliable autopilots for flying

MIT researchers developed a machine-learning technique that can autonomously drive a car or fly a plane through a very difficult “stabilize-avoid” scenario, in which the vehicle must stabilize its trajectory to arrive at and stay within some goal region, while avoiding obstacles. Image: Courtesy of the researchers

By Adam Zewe | MIT News Office

In the film “Top Gun: Maverick,” Maverick, played by Tom Cruise, is charged with training young pilots to complete a seemingly impossible mission — to fly their jets deep into a rocky canyon, staying so low to the ground they cannot be detected by radar, then rapidly climb out of the canyon at an extreme angle, avoiding the rock walls. Spoiler alert: With Maverick’s help, these human pilots accomplish their mission.

A machine, on the other hand, would struggle to complete the same pulse-pounding task. To an autonomous aircraft, for instance, the most straightforward path toward the target conflicts with what the machine must do to avoid colliding with the canyon walls or to stay undetected. Many existing AI methods aren’t able to overcome this conflict, known as the stabilize-avoid problem, and would be unable to reach their goal safely.

MIT researchers have developed a new technique that can solve complex stabilize-avoid problems better than other methods. Their machine-learning approach matches or exceeds the safety of existing methods while providing a tenfold increase in stability, meaning the agent reaches and remains stable within its goal region.

In an experiment that would make Maverick proud, their technique effectively piloted a simulated jet aircraft through a narrow corridor without crashing into the ground. 

“This has been a longstanding, challenging problem. A lot of people have looked at it but didn’t know how to handle such high-dimensional and complex dynamics,” says Chuchu Fan, the Wilson Assistant Professor of Aeronautics and Astronautics, a member of the Laboratory for Information and Decision Systems (LIDS), and senior author of a new paper on this technique.

Fan is joined by lead author Oswin So, a graduate student. The paper will be presented at the Robotics: Science and Systems conference.

The stabilize-avoid challenge

Many approaches tackle complex stabilize-avoid problems by simplifying the system so it can be solved with straightforward math, but the simplified results often don’t hold up to real-world dynamics.

More effective techniques use reinforcement learning, a machine-learning method where an agent learns by trial-and-error with a reward for behavior that gets it closer to a goal. But there are really two goals here — remain stable and avoid obstacles — and finding the right balance is tedious.

The MIT researchers broke the problem down into two steps. First, they reframe the stabilize-avoid problem as a constrained optimization problem. In this setup, solving the optimization enables the agent to reach and stabilize to its goal, meaning it stays within a certain region. By applying constraints, they ensure the agent avoids obstacles, So explains. 

Then for the second step, they reformulate that constrained optimization problem into a mathematical representation known as the epigraph form and solve it using a deep reinforcement learning algorithm. The epigraph form lets them bypass the difficulties other methods face when using reinforcement learning. 

“But deep reinforcement learning isn’t designed to solve the epigraph form of an optimization problem, so we couldn’t just plug it into our problem. We had to derive the mathematical expressions that work for our system. Once we had those new derivations, we combined them with some existing engineering tricks used by other methods,” So says.
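
In outline, that construction is the standard epigraph trick from optimization. The sketch below is generic — J_stabilize and J_avoid stand in for the paper’s actual objective and constraint, whose exact forms the researchers derive for their system:

```latex
% Generic epigraph reformulation (an illustrative sketch; the paper's
% exact stabilize and avoid objectives differ). Introducing an auxiliary
% variable z turns the constrained problem into optimization over the
% region above the objective's graph -- its epigraph:
\min_{\pi} \; J_{\text{stabilize}}(\pi)
   \quad \text{s.t.} \quad J_{\text{avoid}}(\pi) \le 0
\qquad \Longleftrightarrow \qquad
\min_{\pi,\, z} \; z
   \quad \text{s.t.} \quad J_{\text{stabilize}}(\pi) \le z, \;\;
   J_{\text{avoid}}(\pi) \le 0
```

Minimizing the auxiliary variable z, which bounds the objective from above, gives the problem a shape that a deep reinforcement learning algorithm can be adapted to solve.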

No points for second place

To test their approach, they designed a number of control experiments with different initial conditions. For instance, in some simulations, the autonomous agent needs to reach and stay inside a goal region while making drastic maneuvers to avoid obstacles that are on a collision course with it.

This video shows how the researchers used their technique to effectively fly a simulated jet aircraft in a scenario where it had to stabilize to a target near the ground while maintaining a very low altitude and staying within a narrow flight corridor. Courtesy of the researchers.

When compared with several baselines, their approach was the only one that could stabilize all trajectories while maintaining safety. To push their method even further, they used it to fly a simulated jet aircraft in a scenario one might see in a “Top Gun” movie. The jet had to stabilize to a target near the ground while maintaining a very low altitude and staying within a narrow flight corridor.

The simulated jet model was open-sourced in 2018 and had been designed by flight control experts as a testing challenge: Could researchers create a scenario that their controller could not fly? But the model was so complicated it was difficult to work with, and it still couldn’t handle complex scenarios, Fan says.

The MIT researchers’ controller was able to prevent the jet from crashing or stalling while stabilizing to the goal far better than any of the baselines.

In the future, this technique could be a starting point for designing controllers for highly dynamic robots that must meet safety and stability requirements, like autonomous delivery drones. Or it could be implemented as part of a larger system. For instance, the algorithm might be activated only when a car skids on a snowy road, to help the driver safely navigate back to a stable trajectory.

Navigating extreme scenarios that a human wouldn’t be able to handle is where their approach really shines, So adds.

“We believe that a goal we should strive for as a field is to give reinforcement learning the safety and stability guarantees that we will need to provide us with assurance when we deploy these controllers on mission-critical systems. We think this is a promising first step toward achieving that goal,” he says.

Moving forward, the researchers want to enhance their technique so it is better able to take uncertainty into account when solving the optimization. They also want to investigate how well the algorithm works when deployed on hardware, since there will be mismatches between the dynamics of the model and those in the real world.

“Professor Fan’s team has improved reinforcement learning performance for dynamical systems where safety matters. Instead of just hitting a goal, they create controllers that ensure the system can reach its target safely and stay there indefinitely,” says Stanley Bak, an assistant professor in the Department of Computer Science at Stony Brook University, who was not involved with this research. “Their improved formulation allows the successful generation of safe controllers for complex scenarios, including a 17-state nonlinear jet aircraft model designed in part by researchers from the Air Force Research Lab (AFRL), which incorporates nonlinear differential equations with lift and drag tables.”

The work is funded, in part, by MIT Lincoln Laboratory under the Safety in Aerobatic Flight Regimes program.


Helping robots handle fluids

Researchers created “FluidLab,” a simulation environment with a diverse set of manipulation tasks involving complex fluid dynamics. Image: Alex Shipps/MIT CSAIL via Midjourney

Imagine you’re enjoying a picnic by a riverbank on a windy day. A gust of wind catches your paper napkin, which lands on the water’s surface and quickly drifts away from you. You grab a nearby stick and carefully agitate the water to retrieve it, creating a series of small waves. These waves eventually push the napkin back toward the shore, where you grab it. In this scenario, the water acts as a medium for transmitting forces, enabling you to manipulate the position of the napkin without direct contact.

Humans regularly engage with various types of fluids in their daily lives, but doing so has been a formidable and elusive goal for current robotic systems. Hand you a latte? A robot can do that. Make it? That’s going to require a bit more nuance. 

FluidLab, a new simulation tool from researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), enhances robot learning for complex fluid manipulation tasks like making latte art, ice cream, and even manipulating air. The virtual environment offers a versatile collection of intricate fluid-handling challenges, involving both solids and liquids, and multiple fluids simultaneously. FluidLab supports modeling solids, liquids, and gases, including elastic, plastic, and rigid objects; Newtonian and non-Newtonian liquids; and smoke and air.

At the heart of FluidLab lies FluidEngine, an easy-to-use physics simulator capable of seamlessly calculating and simulating various materials and their interactions, all while harnessing the power of graphics processing units (GPUs) for faster processing. The engine is differentiable, meaning the simulator can incorporate physics knowledge for a more realistic physical world model, leading to more efficient learning and planning for robotic tasks. In contrast, most existing reinforcement learning methods lack such a world model and depend on trial and error alone. This enhanced capability, say the researchers, lets users experiment with robot learning algorithms and toy with the boundaries of current robotic manipulation abilities.

To set the stage, the researchers tested robot learning algorithms using FluidLab, discovering and overcoming unique challenges in fluid systems. By developing clever optimization methods, they’ve been able to transfer these lessons from simulation to real-world scenarios effectively.

“Imagine a future where a household robot effortlessly assists you with daily tasks, like making coffee, preparing breakfast, or cooking dinner. These tasks involve numerous fluid manipulation challenges. Our benchmark is a first step towards enabling robots to master these skills, benefiting households and workplaces alike,” says visiting researcher at MIT CSAIL and research scientist at the MIT-IBM Watson AI Lab Chuang Gan, the senior author on a new paper about the research. “For instance, these robots could reduce wait times and enhance customer experiences in busy coffee shops. FluidEngine is, to our knowledge, the first-of-its-kind physics engine that supports a wide range of materials and couplings while being fully differentiable. With our standardized fluid manipulation tasks, researchers can evaluate robot learning algorithms and push the boundaries of today’s robotic manipulation capabilities.”

Fluid fantasia

Over the past few decades, scientists in the robotic manipulation domain have mainly focused on manipulating rigid objects, or on very simplistic fluid manipulation tasks like pouring water. Studying these manipulation tasks involving fluids in the real world can also be an unsafe and costly endeavor. 

With fluid manipulation, it’s not always just about fluids, though. In many tasks, such as creating the perfect ice cream swirl, mixing solids into liquids, or paddling through the water to move objects, it’s a dance of interactions between fluids and various other materials. Simulation environments must support “coupling,” or how two different material properties interact. Fluid manipulation tasks usually require pretty fine-grained precision, with delicate interactions and handling of materials, setting them apart from straightforward tasks like pushing a block or opening a bottle. 

FluidLab’s simulator can quickly calculate how different materials interact with each other. 

Helping out the GPUs is “Taichi,” a domain-specific language embedded in Python. The system can compute gradients (rates of change in environment configurations with respect to the robot’s actions) for different material types and their interactions (couplings) with one another. This precise information can be used to fine-tune the robot’s movements for better performance. As a result, the simulator allows for faster and more efficient solutions, setting it apart from its counterparts.
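
To make the gradient idea concrete, here is a minimal sketch of differentiable-simulation control, assuming Taichi’s autodiff API (ti.ad.Tape, Taichi ≥ 1.0). The 1-D damped point-mass dynamics, horizon, and learning rate are invented for illustration — this is not FluidLab’s or FluidEngine’s actual API:

```python
# A toy control problem solved by backpropagating through the simulator:
# find a force schedule that brings a particle to a target position.
import taichi as ti

ti.init(arch=ti.cpu)

STEPS = 64
force = ti.field(dtype=ti.f32, shape=STEPS, needs_grad=True)      # control
pos = ti.field(dtype=ti.f32, shape=STEPS + 1, needs_grad=True)
vel = ti.field(dtype=ti.f32, shape=STEPS + 1, needs_grad=True)
loss = ti.field(dtype=ti.f32, shape=(), needs_grad=True)
DT, DRAG, TARGET = 0.05, 0.5, 1.0

@ti.kernel
def step(t: ti.i32):
    # stand-in for the engine's forward dynamics
    vel[t + 1] = vel[t] + DT * (force[t] - DRAG * vel[t])
    pos[t + 1] = pos[t] + DT * vel[t + 1]

@ti.kernel
def compute_loss():
    loss[None] = (pos[STEPS] - TARGET) ** 2

for _ in range(100):
    with ti.ad.Tape(loss=loss):      # records the rollout for backprop
        for t in range(STEPS):
            step(t)
        compute_loss()
    for t in range(STEPS):           # gradient step directly on the controls
        force[t] -= 1.0 * force.grad[t]
```

Because gradients flow through the physics itself, a handful of descent steps can stand in for many trial-and-error rollouts.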

The 10 tasks the team put forth fell into two categories: using fluids to manipulate hard-to-reach objects, and directly manipulating fluids for specific goals. Examples included separating liquids, guiding floating objects, transporting items with water jets, mixing liquids, creating latte art, shaping ice cream, and controlling air circulation. 

“The simulator works similarly to how humans use their mental models to predict the consequences of their actions and make informed decisions when manipulating fluids. This is a significant advantage of our simulator compared to others,” says Carnegie Mellon University PhD student Zhou Xian, another author on the paper. “While other simulators primarily support reinforcement learning, ours supports reinforcement learning and allows for more efficient optimization techniques. Utilizing the gradients provided by the simulator supports highly efficient policy search, making it a more versatile and effective tool.”

Next steps

FluidLab’s future looks bright. The current work attempted to transfer trajectories optimized in simulation to real-world tasks directly in an open-loop manner. As a next step, the team is working to develop a closed-loop policy in simulation that takes as input the state or the visual observations of the environments and performs fluid manipulation tasks in real time, and then to transfer the learned policies to real-world scenes.

The platform is publicly available, and researchers hope it will benefit future studies in developing better methods for solving complex fluid manipulation tasks.

“Humans interact with fluids in everyday tasks, including pouring and mixing liquids (coffee, yogurts, soups, batter), washing and cleaning with water, and more,” says University of Maryland computer science professor Ming Lin, who was not involved in the work. “For robots to assist humans and serve in similar capacities for day-to-day tasks, novel techniques for interacting and handling various liquids of different properties (e.g. viscosity and density of materials) would be needed and remains a major computational challenge for real-time autonomous systems. This work introduces the first comprehensive physics engine, FluidLab, to enable modeling of diverse, complex fluids and their coupling with other objects and dynamical systems in the environment. The mathematical formulation of ‘differentiable fluids’ as presented in the paper makes it possible for integrating versatile fluid simulation as a network layer in learning-based algorithms and neural network architectures for intelligent systems to operate in real-world applications.”

Gan and Xian wrote the paper alongside Hsiao-Yu Tung, a postdoc in the MIT Department of Brain and Cognitive Sciences; Antonio Torralba, an MIT professor of electrical engineering and computer science and CSAIL principal investigator; Dartmouth College Assistant Professor Bo Zhu; Columbia University PhD student Zhenjia Xu; and CMU Assistant Professor Katerina Fragkiadaki. The team’s research is supported by the MIT-IBM Watson AI Lab, Sony AI, a DARPA Young Investigator Award, an NSF CAREER award, an AFOSR Young Investigator Award, DARPA Machine Common Sense, and the National Science Foundation.

The research was presented at the International Conference on Learning Representations earlier this month.

Minuscule device could help preserve the battery life of tiny sensors

Researchers from MIT and elsewhere have built a wake-up receiver that communicates using terahertz waves, which enabled them to produce a chip more than 10 times smaller than similar devices. Their receiver, which also includes authentication to protect it from a certain type of attack, could help preserve the battery life of tiny sensors or robots. Image: Jose-Luis Olivares/MIT with figure courtesy of the researchers

By Adam Zewe | MIT News Office

Scientists are striving to develop ever-smaller internet-of-things devices, like sensors tinier than a fingertip that could make nearly any object trackable. These diminutive sensors have minuscule batteries that are often nearly impossible to replace, so engineers incorporate wake-up receivers that keep devices in low-power “sleep” mode when not in use, preserving battery life.

Researchers at MIT have developed a new wake-up receiver that is less than one-tenth the size of previous devices and consumes only a few microwatts of power. Their receiver also incorporates a low-power, built-in authentication system, which protects the device from a certain type of attack that could quickly drain its battery.

Many common types of wake-up receivers are built on the centimeter scale since their antennas must be proportional to the size of the radio waves they use to communicate. Instead, the MIT team built a receiver that utilizes terahertz waves, which are about one-tenth the length of radio waves. Their chip is barely more than 1 square millimeter in size. 

They used their wake-up receiver to demonstrate effective, wireless communication with a signal source that was several meters away, showcasing a range that would enable their chip to be used in miniaturized sensors.

For instance, the wake-up receiver could be incorporated into microrobots that monitor environmental changes in areas that are either too small or hazardous for other robots to reach. Also, since the device uses terahertz waves, it could be utilized in emerging applications, such as field-deployable radio networks that work as swarms to collect localized data.

“By using terahertz frequencies, we can make an antenna that is only a few hundred micrometers on each side, which is a very small size. This means we can integrate these antennas to the chip, creating a fully integrated solution. Ultimately, this enabled us to build a very small wake-up receiver that could be attached to tiny sensors or radios,” says Eunseok Lee, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on the wake-up receiver.
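
A quick back-of-the-envelope check shows why the sizes in that quote work out. The 300-gigahertz carrier below is an assumed round number for illustration, not the paper’s exact frequency:

```python
# Antenna dimensions scale with wavelength: wavelength = c / f.
c = 3.0e8                      # speed of light, m/s
f = 300e9                      # assumed terahertz carrier, Hz
wavelength = c / f             # 1e-3 m, i.e., 1 mm
print(wavelength / 2 * 1e6)    # half-wave element: ~500 micrometers
```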

Lee wrote the paper with his co-advisors and senior authors Anantha Chandrakasan, dean of the MIT School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science, who leads the Energy-Efficient Circuits and Systems Group, and Ruonan Han, an associate professor in EECS, who leads the Terahertz Integrated Electronics Group in the Research Laboratory of Electronics; as well as others at MIT, the Indian Institute of Science, and Boston University. The research is being presented at the IEEE Custom Integrated Circuits Conference. 

Scaling down the receiver

Terahertz waves, found on the electromagnetic spectrum between microwaves and infrared light, have much higher frequencies, and correspondingly shorter wavelengths, than radio waves. Sometimes called “pencil beams,” terahertz waves travel in a more direct path than other signals, which makes them more secure, Lee explains.

However, the waves have such high frequencies that terahertz receivers often multiply the terahertz signal by another signal to alter the frequency, a process known as frequency mixing. Terahertz mixing consumes a great deal of power.

Instead, Lee and his collaborators developed a zero-power-consumption detector that can detect terahertz waves without the need for frequency mixing. The detector uses a pair of tiny transistors as antennas, which consume very little power.

Even with both antennas on the chip, their wake-up receiver was only 1.54 square millimeters in size and consumed less than 3 microwatts of power. This dual-antenna setup maximizes performance and makes it easier to read signals.

Once received, their chip amplifies a terahertz signal and then converts analog data into a digital signal for processing. This digital signal carries a token, which is a string of bits (0s and 1s). If the token corresponds to the wake-up receiver’s token, it will activate the device.

Ramping up security

In most wake-up receivers, the same token is reused multiple times, so an eavesdropping attacker could figure out what it is. Then the hacker could send a signal that would activate the device over and over again, using what is called a denial-of-sleep attack.

“With a wake-up receiver, the lifetime of a device could be improved from one day to one month, for instance, but an attacker could use a denial-of-sleep attack to drain that entire battery life in even less than a day. That is why we put authentication into our wake-up receiver,” he explains.

They added an authentication block that utilizes an algorithm to randomize the device’s token each time, using a key that is shared with trusted senders. This key acts like a password — if a sender knows the password, they can send a signal with the right token. The researchers do this using a technique known as lightweight cryptography, which ensures the entire authentication process only consumes a few extra nanowatts of power. 
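
The idea can be sketched in a few lines. The chip itself uses dedicated lightweight-cryptography hardware; the HMAC below is a software stand-in chosen only to show how a shared key plus a rolling counter yields a fresh token on every wake-up, and the key and token width are illustrative:

```python
# Rolling wake-up tokens: an eavesdropped token is useless once used.
import hmac
import hashlib

KEY = b"shared-secret-key"   # provisioned to trusted senders (assumption)

def wake_token(counter: int) -> bytes:
    # keyed MAC over a rolling counter -> a fresh token per wake-up
    return hmac.new(KEY, counter.to_bytes(8, "big"), hashlib.sha256).digest()[:2]

counter = 0                              # receiver's view of the counter
received = wake_token(counter)           # pretend this arrived over the air
if hmac.compare_digest(received, wake_token(counter)):
    counter += 1    # wake up and advance, so a replayed token is rejected
```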

They tested their device by sending terahertz signals to the wake-up receiver as they increased the distance between the chip and the terahertz source. In this way, they tested the sensitivity of their receiver — the minimum signal power needed for the device to successfully detect a signal. Signals that travel farther have less power.

“We achieved 5- to 10-meter longer distance demonstrations than others, using a device with a very small size and microwatt level power consumption,” Lee says.

But to be most effective, terahertz waves need to hit the detector dead-on. If the chip is at an angle, some of the signal will be lost. So, the researchers paired their device with a terahertz beam-steerable array, recently developed by the Han group, to precisely direct the terahertz waves. Using this technique, communication could be sent to multiple chips with minimal signal loss.

In the future, Lee and his collaborators want to tackle this problem of signal degradation. If they can find a way to maintain signal strength when receiver chips move or tilt slightly, they could increase the performance of these devices. They also want to demonstrate their wake-up receiver in very small sensors and fine-tune the technology for use in real-world devices.

“We have developed a rich technology portfolio for future millimeter-sized sensing, tagging, and authentication platforms, including terahertz backscattering, energy harvesting, and electrical beam steering and focusing. Now, this portfolio is more complete with Eunseok’s first-ever terahertz wake-up receiver, which is critical to save the extremely limited energy available on those mini platforms,” Han says.

Additional co-authors include Muhammad Ibrahim Wasiq Khan PhD ’22; Xibi Chen, an EECS graduate student; Utsav Banerjee PhD ’21, an assistant professor at the Indian Institute of Science; Nathan Monroe PhD ’22; and Rabia Tugce Yazicigil, an assistant professor of electrical and computer engineering at Boston University.

Drones navigate unseen environments with liquid neural networks

Makram Chahine, a PhD student in electrical engineering and computer science and an MIT CSAIL affiliate, leads a drone used to test liquid neural networks. Photo: Mike Grimmett/MIT CSAIL

By Rachel Gordon | MIT CSAIL

In the vast, expansive skies where birds once ruled supreme, a new crop of aviators is taking flight. These pioneers of the air are not living creatures, but rather a product of deliberate innovation: drones. But these aren’t your typical flying bots, humming around like mechanical bees. Rather, they’re avian-inspired marvels that soar through the sky, guided by liquid neural networks to navigate ever-changing and unseen environments with precision and ease.

Inspired by the adaptable nature of organic brains, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have introduced a method for robust flight navigation agents to master vision-based fly-to-target tasks in intricate, unfamiliar environments. The liquid neural networks, which can continuously adapt to new data inputs, showed prowess in making reliable decisions in unknown domains like forests, urban landscapes, and environments with added noise, rotation, and occlusion. These adaptable models, which outperformed many state-of-the-art counterparts in navigation tasks, could enable potential real-world drone applications like search and rescue, delivery, and wildlife monitoring.

The researchers’ recent study, published in Science Robotics, details how this new breed of agents can adapt to significant distribution shifts, a long-standing challenge in the field. The team’s new class of machine-learning algorithms captures the causal structure of tasks from high-dimensional, unstructured data, such as pixel inputs from a drone-mounted camera. These networks can then extract crucial aspects of a task (i.e., understand the task at hand) and ignore irrelevant features, allowing acquired navigation skills to transfer seamlessly to new environments and targets.


“We are thrilled by the immense potential of our learning-based control approach for robots, as it lays the groundwork for solving problems that arise when training in one environment and deploying in a completely distinct environment without additional training,” says Daniela Rus, CSAIL director and the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT. “Our experiments demonstrate that we can effectively teach a drone to locate an object in a forest during summer, and then deploy the model in winter, with vastly different surroundings, or even in urban settings, with varied tasks such as seeking and following. This adaptability is made possible by the causal underpinnings of our solutions. These flexible algorithms could one day aid in decision-making based on data streams that change over time, such as medical diagnosis and autonomous driving applications.”

A daunting challenge was at the forefront: Do machine-learning systems understand the task they are given from data when flying drones to an unlabeled object? And, would they be able to transfer their learned skill and task to new environments with drastic changes in scenery, such as flying from a forest to an urban landscape? What’s more, unlike the remarkable abilities of our biological brains, deep learning systems struggle with capturing causality, frequently over-fitting their training data and failing to adapt to new environments or changing conditions. This is especially troubling for resource-limited embedded systems, like aerial drones, that need to traverse varied environments and respond to obstacles instantaneously. 

The liquid networks, in contrast, offer promising preliminary indications of their capacity to address this crucial weakness in deep learning systems. The team’s system was first trained on data collected by a human pilot, to see how it transferred learned navigation skills to new environments under drastic changes in scenery and conditions. Unlike traditional neural networks, whose parameters are fixed after the training phase, the liquid neural net’s parameters can change over time, making them not only interpretable, but more resilient to unexpected or noisy data.
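
A rough sketch of what “parameters that change over time” means, in the style of the liquid time-constant cells from Hasani and colleagues’ earlier work. The sizes, the forward-Euler solver, and the gating form are illustrative simplifications, not the network actually flown on the drone:

```python
# One Euler step of a liquid time-constant (LTC) style cell.
import numpy as np

def ltc_step(x, u, W, W_in, b, tau, A, dt=0.05):
    # The gate f depends on the current input, so each unit's effective
    # time constant keeps shifting after training -- the sense in which
    # the network's dynamics stay "liquid."
    f = np.tanh(W @ x + W_in @ u + b)
    dx = -(1.0 / tau + f) * x + f * A
    return x + dt * dx

rng = np.random.default_rng(0)
n_units, n_inputs = 8, 3
x = np.zeros(n_units)
W = rng.normal(0.0, 0.3, (n_units, n_units))
W_in = rng.normal(0.0, 0.3, (n_units, n_inputs))
b, tau, A = np.zeros(n_units), np.ones(n_units), np.ones(n_units)
for _ in range(100):                     # drive the cell with a toy stream
    x = ltc_step(x, rng.normal(size=n_inputs), W, W_in, b, tau, A)
```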

In a series of quadrotor closed-loop control experiments, the drones underwent range tests, stress tests, target rotation and occlusion, hiking with adversaries, triangular loops between objects, and dynamic target tracking. They tracked moving targets, and executed multi-step loops between objects in never-before-seen environments, surpassing performance of other cutting-edge counterparts. 

The team believes that the ability to learn from limited expert data and understand a given task while generalizing to new environments could make autonomous drone deployment more efficient, cost-effective, and reliable. Liquid neural networks, they noted, could enable autonomous air mobility drones to be used for environmental monitoring, package delivery, autonomous vehicles, and robotic assistants. 

“The experimental setup presented in our work tests the reasoning capabilities of various deep learning systems in controlled and straightforward scenarios,” says MIT CSAIL Research Affiliate Ramin Hasani. “There is still so much room left for future research and development on more complex reasoning challenges for AI systems in autonomous navigation applications, which has to be tested before we can safely deploy them in our society.”

“Robust learning and performance in out-of-distribution tasks and scenarios are some of the key problems that machine learning and autonomous robotic systems have to conquer to make further inroads in society-critical applications,” says Alessio Lomuscio, professor of AI safety in the Department of Computing at Imperial College London. “In this context, the performance of liquid neural networks, a novel brain-inspired paradigm developed by the authors at MIT, reported in this study is remarkable. If these results are confirmed in other experiments, the paradigm here developed will contribute to making AI and robotic systems more reliable, robust, and efficient.”

Clearly, the sky is no longer the limit, but rather a vast playground for the boundless possibilities of these airborne marvels. 

Hasani and PhD student Makram Chahine; Patrick Kao ’22, MEng ’22; and PhD student Aaron Ray SM ’21 wrote the paper with Ryan Shubert ’20, MEng ’22; MIT postdocs Mathias Lechner and Alexander Amini; and Daniela Rus.

This research was supported, in part, by Schmidt Futures, the U.S. Air Force Research Laboratory, the U.S. Air Force Artificial Intelligence Accelerator, and the Boeing Co.

Robotic hand can identify objects with just one grasp

MIT researchers developed a soft-rigid robotic finger that incorporates powerful sensors along its entire length, enabling them to produce a robotic hand that could accurately identify objects after only one grasp. Image: Courtesy of the researchers

By Adam Zewe | MIT News Office

Inspired by the human finger, MIT researchers have developed a robotic hand that uses high-resolution touch sensing to accurately identify an object after grasping it just one time.

Many robotic hands pack all their powerful sensors into the fingertips, so an object must be in full contact with those fingertips to be identified, which can take multiple grasps. Other designs use lower-resolution sensors spread along the entire finger, but these don’t capture as much detail, so multiple regrasps are often required.

Instead, the MIT team built a robotic finger with a rigid skeleton encased in a soft outer layer that has multiple high-resolution sensors incorporated under its transparent “skin.” The sensors, which use a camera and LEDs to gather visual information about an object’s shape, provide continuous sensing along the finger’s entire length. Each finger captures rich data on many parts of an object simultaneously.

Using this design, the researchers built a three-fingered robotic hand that could identify objects after only one grasp, with about 85 percent accuracy. The rigid skeleton makes the fingers strong enough to pick up a heavy item, such as a drill, while the soft skin enables them to securely grasp a pliable item, like an empty plastic water bottle, without crushing it.

These soft-rigid fingers could be especially useful in an at-home-care robot designed to interact with an elderly individual. The robot could lift a heavy item off a shelf with the same hand it uses to help the individual take a bath.

“Having both soft and rigid elements is very important in any hand, but so is being able to perform great sensing over a really large area, especially if we want to consider doing very complicated manipulation tasks like what our own hands can do. Our goal with this work was to combine all the things that make our human hands so good into a robotic finger that can do tasks other robotic fingers can’t currently do,” says mechanical engineering graduate student Sandra Liu, co-lead author of a research paper on the robotic finger.

Liu wrote the paper with co-lead author and mechanical engineering undergraduate student Leonardo Zamora Yañez and her advisor, Edward Adelson, the John and Dorothy Wilson Professor of Vision Science in the Department of Brain and Cognitive Sciences and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). The research will be presented at the RoboSoft Conference.

A human-inspired finger

The robotic finger consists of a rigid, 3D-printed endoskeleton that is placed in a mold and encased in a transparent silicone “skin.” Making the finger in a mold removes the need for fasteners or adhesives to hold the silicone in place.

The researchers designed the mold with a curved shape so the robotic fingers are slightly curved when at rest, just like human fingers.

“Silicone will wrinkle when it bends, so we thought that if we have the finger molded in this curved position, when you curve it more to grasp an object, you won’t induce as many wrinkles. Wrinkles are good in some ways — they can help the finger slide along surfaces very smoothly and easily — but we didn’t want wrinkles that we couldn’t control,” Liu says.

The endoskeleton of each finger contains a pair of detailed touch sensors, known as GelSight sensors, embedded into the top and middle sections, underneath the transparent skin. The sensors are placed so the range of the cameras overlaps slightly, giving the finger continuous sensing along its entire length.

The GelSight sensor, based on technology pioneered in the Adelson group, is composed of a camera and three colored LEDs. When the finger grasps an object, the camera captures images as the colored LEDs illuminate the skin from the inside.

Image: Courtesy of the researchers

Using the illuminated contours that appear in the soft skin, an algorithm works backward to map the contours of the grasped object’s surface. The researchers trained a machine-learning model to identify objects using raw camera image data.
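
That backward calculation rests on the photometric-stereo principle: with known light directions (the three colored LEDs), the observed intensities at a pixel pin down the surface normal there. The light directions and intensities below are invented numbers for a single pixel, and the real sensor’s calibration is more involved:

```python
# Recover a surface normal from three colored-light intensities under a
# Lambertian model, I = L @ n (illustrative values, not sensor data).
import numpy as np

L = np.array([[0.00,  0.50, 0.87],    # red LED direction
              [0.43, -0.25, 0.87],    # green LED direction
              [-0.43, -0.25, 0.87]])  # blue LED direction
I = np.array([0.7, 0.4, 0.5])         # R, G, B intensities at one pixel

n = np.linalg.solve(L, I)             # invert the Lambertian model
n /= np.linalg.norm(n)                # unit surface normal at that pixel
# Repeating this per pixel and integrating the normals recovers the
# contact geometry -- the "backward calculation" described above.
```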

As they fine-tuned the finger fabrication process, the researchers ran into several obstacles.

First, silicone has a tendency to peel off surfaces over time. Liu and her collaborators found they could limit this peeling by adding small curves along the hinges between the joints in the endoskeleton.

When the finger bends, the bending of the silicone is distributed along the tiny curves, which reduces stress and prevents peeling. They also added creases to the joints so the silicone is not squashed as much when the finger bends.

While troubleshooting their design, the researchers realized wrinkles in the silicone prevent the skin from ripping.

“The usefulness of the wrinkles was an accidental discovery on our part. When we synthesized them on the surface, we found that they actually made the finger more durable than we expected,” she says.

Getting a good grasp

Once they had perfected the design, the researchers built a robotic hand using two fingers arranged in a Y pattern with a third finger as an opposing thumb. The hand captures six images when it grasps an object (two from each finger) and sends those images to a machine-learning algorithm, which uses them as inputs to identify the object.
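
A minimal sketch of that “six images, one label” pipeline, under assumptions: the tiny convolutional network, input size, and 10-way label set below are placeholders, not the authors’ architecture:

```python
# Fuse six tactile views from one grasp into a single object prediction.
import torch
import torch.nn as nn

class GraspClassifier(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.encoder = nn.Sequential(            # shared per-view features
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32 * 6, n_classes)  # fuse all six views

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (batch, 6 images, 3 channels, H, W)
        feats = [self.encoder(views[:, i]) for i in range(6)]
        return self.head(torch.cat(feats, dim=1))

logits = GraspClassifier()(torch.randn(1, 6, 3, 64, 64))
print(logits.argmax(dim=1))   # predicted object class for this grasp
```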

Because the hand has tactile sensing covering all of its fingers, it can gather rich tactile data from a single grasp.

“Although we have a lot of sensing in the fingers, maybe adding a palm with sensing would help it make tactile distinctions even better,” Liu says.

In the future, the researchers also want to improve the hardware to reduce the amount of wear and tear in the silicone over time and add more actuation to the thumb so it can perform a wider variety of tasks.


This work was supported, in part, by the Toyota Research Institute, the Office of Naval Research, and the SINTEF BIFROST project.

A four-legged robotic system for playing soccer on various terrains

Researchers created DribbleBot, a system for in-the-wild dribbling on diverse natural terrains including sand, gravel, mud, and snow using onboard sensing and computing. In addition to these football feats, such robots may someday aid humans in search-and-rescue missions. Photo: Mike Grimmett/MIT CSAIL

By Rachel Gordon | MIT CSAIL

If you’ve ever played soccer with a robot, it’s a familiar feeling. Sun glistens down on your face as the smell of grass permeates the air. You look around. A four-legged robot is hustling toward you, dribbling with determination. 

While the bot doesn’t display a Lionel Messi-like level of ability, it’s an impressive in-the-wild dribbling system nonetheless. Researchers from MIT’s Improbable Artificial Intelligence Lab, part of the Computer Science and Artificial Intelligence Laboratory (CSAIL), have developed a legged robotic system that can dribble a soccer ball under the same conditions as humans. The bot used a mixture of onboard sensing and computing to traverse different natural terrains such as sand, gravel, mud, and snow, and adapt to their varied impact on the ball’s motion. Like every committed athlete, “DribbleBot” could get up and recover the ball after falling. 

Programming robots to play soccer has been an active research area for some time. However, the team wanted to automatically learn how to actuate the legs during dribbling, to enable the discovery of hard-to-script skills for responding to diverse terrains like snow, gravel, sand, grass, and pavement. Enter simulation.

A robot, ball, and terrain are inside the simulation — a digital twin of the natural world. You can load in the bot and other assets and set physics parameters, and then it handles the forward simulation of the dynamics from there. Four thousand versions of the robot are simulated in parallel in real time, enabling data collection 4,000 times faster than using just one robot. That’s a lot of data. 
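
The payoff of that parallelism is easiest to see as a batched array update: one vectorized call advances every simulated robot at once. The 12-dimensional state and toy dynamics below are placeholders, not the actual physics engine:

```python
# Advancing 4,000 simulated robots in one vectorized step yields 4,000
# transitions per step of wall-clock-equivalent simulation.
import numpy as np

N_ROBOTS, STATE_DIM = 4000, 12
states = np.zeros((N_ROBOTS, STATE_DIM))

def batched_step(states, actions, dt=0.005):
    return states + dt * actions     # stand-in for forward dynamics

rng = np.random.default_rng(0)
for _ in range(10):
    actions = rng.standard_normal((N_ROBOTS, STATE_DIM))  # policy output
    states = batched_step(states, actions)   # 4,000 transitions at once
```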

Video: MIT CSAIL

The robot starts without knowing how to dribble the ball — it just receives a reward when it does, or negative reinforcement when it messes up. So, it’s essentially trying to figure out what sequence of forces it should apply with its legs. “One aspect of this reinforcement learning approach is that we must design a good reward to facilitate the robot learning a successful dribbling behavior,” says MIT PhD student Gabe Margolis, who co-led the work along with Yandong Ji, research assistant in the Improbable AI Lab. “Once we’ve designed that reward, then it’s practice time for the robot: In real time, it’s a couple of days, and in the simulator, hundreds of days. Over time it learns to get better and better at manipulating the soccer ball to match the desired velocity.” 
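
A hedged sketch of what such a dribbling reward might look like: one term for matching the commanded ball velocity, one for staying near the ball, and a small bonus for staying upright. The terms and weights are illustrative guesses, not the paper’s actual reward:

```python
# Toy shaping reward for learning to dribble toward a commanded velocity.
import numpy as np

def dribble_reward(ball_vel, cmd_vel, robot_pos, ball_pos, fell):
    tracking = np.exp(-np.sum((ball_vel - cmd_vel) ** 2))     # velocity match
    proximity = np.exp(-np.sum((robot_pos - ball_pos) ** 2))  # reach the ball
    alive = 0.0 if fell else 0.1                              # upright bonus
    return 2.0 * tracking + 0.5 * proximity + alive

r = dribble_reward(np.array([0.8, 0.0]), np.array([1.0, 0.0]),
                   np.zeros(2), np.array([0.3, 0.1]), fell=False)
```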

The bot could also navigate unfamiliar terrains and recover from falls due to a recovery controller the team built into its system. This controller lets the robot get back up after a fall and switch back to its dribbling controller to continue pursuing the ball, helping it handle out-of-distribution disruptions and terrains. 

“If you look around today, most robots are wheeled. But imagine that there’s a disaster scenario, flooding, or an earthquake, and we want robots to aid humans in the search-and-rescue process. We need the machines to go over terrains that aren’t flat, and wheeled robots can’t traverse those landscapes,” says Pulkit Agrawal, MIT professor, CSAIL principal investigator, and director of the Improbable AI Lab. “The whole point of studying legged robots is to go to terrains outside the reach of current robotic systems,” he adds. “Our goal in developing algorithms for legged robots is to provide autonomy in challenging and complex terrains that are currently beyond the reach of robotic systems.”

The fascination with robot quadrupeds and soccer runs deep — Canadian professor Alan Mackworth first noted the idea in a paper entitled “On Seeing Robots,” presented at VI-92, 1992. Japanese researchers later organized a workshop on “Grand Challenges in Artificial Intelligence,” which led to discussions about using soccer to promote science and technology. The project was launched as the Robot J-League a year later, and global fervor quickly ensued. Shortly after that, “RoboCup” was born. 

Compared to walking alone, dribbling a soccer ball imposes more constraints on DribbleBot’s motion and what terrains it can traverse. The robot must adapt its locomotion to apply forces to the ball to dribble. The interaction between the ball and the landscape could be different from the interaction between the robot and the landscape, such as thick grass or pavement. For example, a soccer ball will experience a drag force on grass that is not present on pavement, and an incline will apply an acceleration force, changing the ball’s typical path. However, the bot’s ability to traverse different terrains is often less affected by these differences in dynamics — as long as it doesn’t slip — so the soccer test can be sensitive to variations in terrain that locomotion alone isn’t.

“Past approaches simplify the dribbling problem, making a modeling assumption of flat, hard ground. The motion is also designed to be more static; the robot isn’t trying to run and manipulate the ball simultaneously,” says Ji. “That’s where more difficult dynamics enter the control problem. We tackled this by extending recent advances that have enabled better outdoor locomotion into this compound task which combines aspects of locomotion and dexterous manipulation together.”

On the hardware side, the robot has a set of sensors that let it perceive the environment, allowing it to feel where it is, “understand” its position, and “see” some of its surroundings. It has a set of actuators that lets it apply forces and move itself and objects. In between the sensors and actuators sits the computer, or “brain,” tasked with converting sensor data into actions, which it will apply through the motors. When the robot is running on snow, it doesn’t see the snow but can feel it through its motor sensors. But soccer is a trickier feat than walking — so the team leveraged cameras on the robot’s head and body for a new sensory modality of vision, in addition to the new motor skill. And then — we dribble. 

“Our robot can go in the wild because it carries all its sensors, cameras, and compute on board. That required some innovations in terms of getting the whole controller to fit onto this onboard compute,” says Margolis. “That’s one area where learning helps because we can run a lightweight neural network and train it to process noisy sensor data observed by the moving robot. This is in stark contrast with most robots today: Typically a robot arm is mounted on a fixed base and sits on a workbench with a giant computer plugged right into it. Neither the computer nor the sensors are in the robotic arm! So, the whole thing is weighty, hard to move around.”

There’s still a long way to go in making these robots as agile as their counterparts in nature, and some terrains were challenging for DribbleBot. Currently, the controller is not trained in simulated environments that include slopes or stairs. The robot isn’t perceiving the geometry of the terrain; it’s only estimating its material contact properties, like friction. If there’s a step up, for example, the robot will get stuck — it won’t be able to lift the ball over the step, an area the team wants to explore in the future. The researchers are also excited to apply lessons learned during development of DribbleBot to other tasks that involve combined locomotion and object manipulation, quickly transporting diverse objects from place to place using the legs or arms.

The research is supported by the DARPA Machine Common Sense Program, the MIT-IBM Watson AI Lab, the National Science Foundation Institute of Artificial Intelligence and Fundamental Interactions, the U.S. Air Force Research Laboratory, and the U.S. Air Force Artificial Intelligence Accelerator. The paper will be presented at the 2023 IEEE International Conference on Robotics and Automation (ICRA).

Resilient bug-sized robots keep flying even after wing damage

MIT researchers have developed resilient artificial muscles that can enable insect-scale aerial robots to effectively recover flight performance after suffering severe damage. Photo: Courtesy of the researchers

By Adam Zewe | MIT News Office

Bumblebees are clumsy fliers. It is estimated that a foraging bee bumps into a flower about once per second, which damages its wings over time. Yet despite having many tiny rips or holes in their wings, bumblebees can still fly.

Aerial robots, on the other hand, are not so resilient. Poke holes in the robot’s wing motors or chop off part of its propeller, and odds are pretty good it will be grounded.

Inspired by the hardiness of bumblebees, MIT researchers have developed repair techniques that enable a bug-sized aerial robot to sustain severe damage to the actuators, or artificial muscles, that power its wings — but to still fly effectively.

They optimized these artificial muscles so the robot can better isolate defects and overcome minor damage, like tiny holes in the actuator. In addition, they demonstrated a novel laser repair method that can help the robot recover from severe damage, such as a fire that scorches the device.

Using their techniques, a damaged robot could maintain flight-level performance after one of its artificial muscles was jabbed by 10 needles, and the actuator was still able to operate after a large hole was burnt into it. Their repair methods enabled a robot to keep flying even after the researchers cut off 20 percent of its wing tip.

This could make swarms of tiny robots better able to perform tasks in tough environments, like conducting a search mission through a collapsing building or dense forest.

“We spent a lot of time understanding the dynamics of soft, artificial muscles and, through both a new fabrication method and a new understanding, we can show a level of resilience to damage that is comparable to insects,” says Kevin Chen, the D. Reid Weedon, Jr. Assistant Professor in the Department of Electrical Engineering and Computer Science (EECS), the head of the Soft and Micro Robotics Laboratory in the Research Laboratory of Electronics (RLE), and the senior author of the paper on these latest advances. “We’re very excited about this. But the insects are still superior to us, in the sense that they can lose up to 40 percent of their wing and still fly. We still have some catch-up work to do.”

Chen wrote the paper with co-lead authors Suhan Kim and Yi-Hsuan Hsiao, who are EECS graduate students; Younghoon Lee, a postdoc; Weikun “Spencer” Zhu, a graduate student in the Department of Chemical Engineering; Zhijian Ren, an EECS graduate student; and Farnaz Niroui, the EE Landsman Career Development Assistant Professor of EECS at MIT and a member of the RLE. The article appeared in Science Robotics.

Robot repair techniques

Using the repair techniques developed by MIT researchers, this microrobot can still maintain flight-level performance even after the artificial muscles that power its wings were jabbed by 10 needles and 20 percent of one wing tip was cut off. Credit: Courtesy of the researchers.

The tiny, rectangular robots being developed in Chen’s lab are about the same size and shape as a microcassette tape, though one robot weighs barely more than a paper clip. Wings on each corner are powered by dielectric elastomer actuators (DEAs), which are soft artificial muscles that use mechanical forces to rapidly flap the wings. These artificial muscles are made from layers of elastomer that are sandwiched between two razor-thin electrodes and then rolled into a squishy tube. When voltage is applied to the DEA, the electrodes squeeze the elastomer, which flaps the wing.

But microscopic imperfections can cause sparks that burn the elastomer and cause the device to fail. About 15 years ago, researchers found they could prevent DEA failures from one tiny defect using a physical phenomenon known as self-clearing. In this process, applying high voltage to the DEA disconnects the local electrode around a small defect, isolating that failure from the rest of the electrode so the artificial muscle still works.

Chen and his collaborators employed this self-clearing process in their robot repair techniques.

First, they optimized the concentration of carbon nanotubes that comprise the electrodes in the DEA. Carbon nanotubes are super-strong but extremely tiny rolls of carbon. Having fewer carbon nanotubes in the electrode improves self-clearing, since the electrode reaches higher temperatures and burns away more easily. But this also reduces the actuator’s power density.

“At a certain point, you will not be able to get enough energy out of the system, but we need a lot of energy and power to fly the robot. We had to find the optimal point between these two constraints — optimize the self-clearing property under the constraint that we still want the robot to fly,” Chen says.

However, even an optimized DEA will fail if it suffers from severe damage, like a large hole that lets too much air into the device.

Chen and his team used a laser to overcome major defects. They carefully cut along the outer contours of a large defect with a laser, which causes minor damage around the perimeter. Then, they can use self-clearing to burn off the slightly damaged electrode, isolating the larger defect.

“In a way, we are trying to do surgery on muscles. But if we don’t use enough power, then we can’t do enough damage to isolate the defect. On the other hand, if we use too much power, the laser will cause severe damage to the actuator that won’t be clearable,” Chen says.

The team soon realized that, when “operating” on such tiny devices, it is very difficult to observe the electrode to see if they had successfully isolated a defect. Drawing on previous work, they incorporated electroluminescent particles into the actuator. Now, if they see light shining, they know that part of the actuator is operational, but dark patches mean they successfully isolated those areas.

The new research could make swarms of tiny robots better able to perform tasks in tough environments, like conducting a search mission through a collapsing building or dense forest. Photo: Courtesy of the researchers

Flight test success

Once they had perfected their techniques, the researchers conducted tests with damaged actuators — some had been jabbed by many needles, while others had holes burned into them. They measured how well the robot performed in flapping-wing, takeoff, and hovering experiments.

Even with damaged DEAs, the repair techniques enabled the robot to maintain its flight performance, with altitude, position, and attitude errors that deviated only very slightly from those of an undamaged robot. With laser surgery, a DEA that would have been broken beyond repair was able to recover 87 percent of its performance.

“I have to hand it to my two students, who did a lot of hard work when they were flying the robot. Flying the robot by itself is very hard, not to mention now that we are intentionally damaging it,” Chen says.

These repair techniques make the tiny robots much more robust, so Chen and his team are now working on teaching them new functions, like landing on flowers or flying in a swarm. They are also developing new control algorithms so the robots can fly better, teaching the robots to control their yaw angle so they can keep a constant heading, and enabling the robots to carry a tiny circuit, with the longer-term goal of carrying their own power source.

“This work is important because small flying robots — and flying insects! — are constantly colliding with their environment. Small gusts of wind can be huge problems for small insects and robots. Thus, we need methods to increase their resilience if we ever hope to be able to use robots like this in natural environments,” says Nick Gravish, an associate professor in the Department of Mechanical and Aerospace Engineering at the University of California at San Diego, who was not involved with this research. “This paper demonstrates how soft actuation and body mechanics can adapt to damage and I think is an impressive step forward.”

This work is funded, in part, by the National Science Foundation (NSF) and a MathWorks Fellowship.


Mix-and-match kit could enable astronauts to build a menagerie of lunar exploration bots

A team of MIT engineers is designing a kit of universal robotic parts that an astronaut could easily mix and match to build different robot “species” to fit various missions on the moon. Credit: hexapod image courtesy of the researchers, edited by MIT News

By Jennifer Chu | MIT News Office

When astronauts begin to build a permanent base on the moon, as NASA plans to do in the coming years, they’ll need help. Robots could potentially do the heavy lifting by laying cables, deploying solar panels, erecting communications towers, and building habitats. But if each robot is designed for a specific action or task, a moon base could become overrun by a zoo of machines, each with its own unique parts and protocols.

To avoid a bottleneck of bots, a team of MIT engineers is designing a kit of universal robotic parts that an astronaut could easily mix and match to rapidly configure different robot “species” to fit various missions on the moon. Once a mission is completed, a robot can be disassembled and its parts used to configure a new robot to meet a different task.

The team calls the system WORMS, for the Walking Oligomeric Robotic Mobility System. The system’s parts include worm-inspired robotic limbs that an astronaut can easily snap onto a base, and that work together as a walking robot. Depending on the mission, parts can be configured to build, for instance, large “pack” bots capable of carrying heavy solar panels up a hill. The same parts could be reconfigured into six-legged spider bots that can be lowered into a lava tube to drill for frozen water.

“You could imagine a shed on the moon with shelves of worms,” says team leader George Lordos, a PhD candidate and graduate instructor in MIT’s Department of Aeronautics and Astronautics (AeroAstro), in reference to the independent, articulated robots that carry their own motors, sensors, computer, and battery. “Astronauts could go into the shed, pick the worms they need, along with the right shoes, body, sensors and tools, and they could snap everything together, then disassemble it to make a new one. The design is flexible, sustainable, and cost-effective.”

Lordos’ team has built and demonstrated a six-legged WORMS robot. Last week, they presented their results at IEEE’s Aerospace Conference, where they also received the conference’s Best Paper Award.

MIT team members include Michael J. Brown, Kir Latyshev, Aileen Liao, Sharmi Shah, Cesar Meza, Brooke Bensche, Cynthia Cao, Yang Chen, Alex S. Miller, Aditya Mehrotra, Jacob Rodriguez, Anna Mokkapati, Tomas Cantu, Katherina Sapozhnikov, Jessica Rutledge, David Trumper, Sangbae Kim, Olivier de Weck, Jeffrey Hoffman, along with Aleks Siemenn, Cormac O’Neill, Diego Rivero, Fiona Lin, Hanfei Cui, Isabella Golemme, John Zhang, Jolie Bercow, Prajwal Mahesh, Stephanie Howe, and Zeyad Al Awwad, as well as Chiara Rissola of Carnegie Mellon University and Wendell Chun of the University of Denver.

Animal instincts

WORMS was conceived in 2022 as an answer to NASA’s Breakthrough, Innovative and Game-changing (BIG) Idea Challenge — an annual competition for university students to design, develop, and demonstrate a game-changing idea. In 2022, NASA challenged students to develop robotic systems that can move across extreme terrain, without the use of wheels.

A team from MIT’s Space Resources Workshop took up the challenge, aiming specifically for a lunar robot design that could navigate the extreme terrain of the moon’s South Pole — a landscape that is marked by thick, fluffy dust; steep, rocky slopes; and deep lava tubes. The environment also hosts “permanently shadowed” regions that could contain frozen water, which, if accessible, would be essential for sustaining astronauts.

As they mulled over ways to navigate the moon’s polar terrain, the students took inspiration from animals. In their initial brainstorming, they noted certain animals could conceptually be suited to certain missions: A spider could drop down and explore a lava tube, a line of elephants could carry heavy equipment while supporting each other down a steep slope, and a goat, tethered to an ox, could help lead the larger animal up the side of a hill as it transports an array of solar panels.

“As we were thinking of these animal inspirations, we realized that one of the simplest animals, the worm, makes movements similar to those of an arm, or a leg, or a backbone, or a tail,” says deputy team leader and AeroAstro graduate student Michael Brown. “And then the lightbulb went off: We could build all these animal-inspired robots using worm-like appendages.”

The research team in Killian Court at MIT. Credit: Courtesy of the researchers

Snap on, snap off

Lordos, who is of Greek descent, helped coin WORMS, and chose the letter “O” to stand for “oligomeric,” which in Greek signifies “a few parts.”

“Our idea was that, with just a few parts, combined in different ways, you could mix and match and get all these different robots,” says AeroAstro undergraduate Brooke Bensche.

The system’s main parts include the appendage, or worm, which can be attached to a body, or chassis, via a “universal interface block” that snaps the two parts together through a twist-and-lock mechanism. The parts can be disconnected with a small tool that releases the block’s spring-loaded pins.

Appendages and bodies can also snap into accessories such as a “shoe,” which the team engineered in the shape of a wok, and a LiDAR system that can map the surroundings to help a robot navigate.

“In future iterations we hope to add more snap-on sensors and tools, such as winches, balance sensors, and drills,” says AeroAstro undergraduate Jacob Rodriguez.

The team developed software that can be tailored to coordinate multiple appendages. As a proof of concept, the team built a six-legged robot about the size of a go-cart. In the lab, they showed that once assembled, the robot’s independent limbs worked to walk over level ground. The team also showed that they could quickly assemble and disassemble the robot in the field, on a desert site in California.
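The article doesn’t include the team’s software, but the coordination idea is easy to sketch. Below is a minimal, hypothetical Python example — an illustration, not the group’s actual code — in which a shared clock and per-limb phase offsets turn six identical appendages into an alternating-tripod walker, with limbs 0, 2, and 4 swinging half a cycle out of phase with limbs 1, 3, and 5:

```python
import math
import time

# Hypothetical sketch (not the WORMS team's software): six identical
# worm-like limbs coordinated by a shared clock and per-limb phase
# offsets. In an alternating-tripod gait, one set of three feet supports
# the body while the other set swings forward.

NUM_LIMBS = 6
GAIT_FREQ_HZ = 0.5          # one full stride every 2 seconds
SWING_AMPLITUDE_DEG = 25.0  # peak joint deflection

def limb_angle(limb_id: int, t: float) -> float:
    """Commanded joint angle (degrees) for one limb at time t (seconds)."""
    phase = math.pi * (limb_id % 2)  # tripod A vs. tripod B, half a cycle apart
    return SWING_AMPLITUDE_DEG * math.sin(2 * math.pi * GAIT_FREQ_HZ * t + phase)

def step_all_limbs(t: float) -> list[float]:
    """One control tick: an angle command for every limb's motor driver."""
    return [limb_angle(i, t) for i in range(NUM_LIMBS)]

if __name__ == "__main__":
    start = time.time()
    for _ in range(5):  # a few control ticks, for illustration
        print([f"{a:+05.1f}" for a in step_all_limbs(time.time() - start)])
        time.sleep(0.1)
```

Under a scheme like this, reconfiguring the robot for a new mission would mostly mean changing the limb count, phase table, and amplitudes rather than rewriting the controller.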

In its first generation, each WORMS appendage measures about 1 meter long and weighs about 20 pounds. In the moon’s gravity, which is about one-sixth that of Earth’s, each limb would weigh about 3 pounds, which an astronaut could easily handle to build or disassemble a robot in the field. The team has planned out the specs for a larger generation with longer and slightly heavier appendages. These bigger parts could be snapped together to build “pack” bots, capable of transporting heavy payloads.

“There are many buzz words that are used to describe effective systems for future space exploration: modular, reconfigurable, adaptable, flexible, cross-cutting, et cetera,” says Kevin Kempton, an engineer at NASA’s Langley Research Center, who served as a judge for the 2022 BIG Idea Challenge. “The MIT WORMS concept incorporates all these qualities and more.”

This research was supported, in part, by NASA, MIT, the Massachusetts Space Grant, the National Science Foundation, and the Fannie and John Hertz Foundation.

Learning to compute through art

Shua Cho works on her artwork in “Introduction to Physical Computing for Artists” at the MIT Student Art Association. Photo: Sarah Bastille

By Ken Shulman | Arts at MIT

One student confesses that motors have always freaked them out. Amy Huynh, a first-year student in the MIT Technology and Policy Program, says “I just didn’t respond to the way electrical engineering and coding is usually taught.”

Huynh and her fellow students found a different way to master coding and circuits during the Independent Activities Period course Introduction to Physical Computing for Artists — a class created by Student Art Association (SAA) instructor Timothy Lee and offered for the first time last January. During the four-week course, students learned to use circuits, wiring, motors, sensors, and displays by developing their own kinetic artworks. 

“It’s a different approach to learning about art, and about circuits,” says Lee, who joined the SAA instructional staff last June after completing his MFA at Goldsmiths, University of London. “Some classes can push the technology too quickly. Here we try to take away the obstacles to learning, to create a collaborative environment, and to frame the technology in the broader concept of making an artwork. For many students, it’s a very effective way to learn.”

Lee graduated from Wesleyan University with three concurrent majors in neuroscience, biology, and studio art. “I didn’t have a lot of free time,” says Lee, who originally intended to attend medical school before deciding to follow his passion for making art. “But I benefited from studying both science and art. Just as I almost always benefited from learning from my peers. I draw on both of those experiences in designing and teaching this class.”

On this January evening, the third of four scheduled classes, Lee leads his students through an exercise to create an MVP — a minimum viable product of their art project. The MVP, he explains, serves as an artist’s proof of concept. “This is the smallest single unit that can demonstrate that your project is doable,” he says. “That you have the bare-minimum functioning hardware and software that shows your project can be scalable to your vision. Our work here is different from pure robotics or pure electronics. Here, the technology and the coding don’t need to be perfect. They need to support your aesthetic and conceptual goals. And here, these things can also be fun.”

Lee distributes various electronic items to the students according to their specific needs — wires, soldering irons, resistors, servo motors, and Arduino components. The students have already acquired a working knowledge of coding and the Arduino language in the first two class sessions. Sophomore Shua Cho is designing an evening gown bedecked with flowers that will open and close continuously. Her MVP is a cluster of three blossoms, mounted on a single post that, when raised and lowered, opens and closes the sewn blossoms. She asks Lee for help in attaching a servo motor — an electric motor that rotates to and holds commanded positions, such as 0, 90, and 180 degrees — to the post. Two other students, working on similar problems, immediately pull their chairs beside Cho and Lee to join the discussion.
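For readers curious how such a motor is driven, here is a minimal, hypothetical sketch of a control loop for the opening and closing blossoms. It assumes an Arduino attached over USB running firmware that reads newline-terminated angle values and passes them to its servo; the port name, baud rate, and protocol are illustrative, not details from the course:

```python
import time
import serial  # pyserial

# Hypothetical sketch: cycle a hobby servo between commanded positions to
# raise and lower the post that opens and closes the sewn blossoms.
# Assumes (for illustration) an Arduino whose firmware parses a
# newline-terminated angle and forwards it to the Servo library.

PORT = "/dev/ttyACM0"  # illustrative port name
BAUD = 9600

with serial.Serial(PORT, BAUD, timeout=1) as arduino:
    time.sleep(2)  # most Arduinos reset when the serial port opens
    for angle in [0, 90, 180, 90] * 4:  # open, midway, closed, midway, ...
        arduino.write(f"{angle}\n".encode())
        time.sleep(0.8)  # give the servo horn time to travel
```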

Shua Cho is designing an evening gown bedecked with flowers that will open and close continuously. Her minimum viable product is a cluster of three blossoms, mounted on a single post that, when raised and lowered, opens and closes the sewn blossoms. Photo: Sarah Bastille

The instructor suggests they observe the dynamics of an old-fashioned train locomotive wheel. One student calls up the image on their laptop. Then, as a group, they reach a solution for Cho — an assembly of wire and glue that will attach the servo motor to the central post, opening and closing the blossoms. It’s improvised, even inelegant. But it works, and proves that the project for the blossom-covered kinetic dress is viable.

“This is one of the things I love about MIT,” says aeronautical and astronautical engineering senior Hannah Munguia. Her project is a pair of hands that, when triggered by a motion sensor, will applaud when anyone walks by. “People raise their hand when they don’t understand something. And other people come to help. The students here trust each other, and are willing to collaborate.”

Student Hannah Munguia (left), instructor Timothy Lee (center), and student Bryan Medina work on artwork in “Introduction to Physical Computing for Artists” at the MIT Student Art Association. Photo: Sarah Bastille

Cho, who enjoys exploring the intersection between fashion and engineering, discovered Lee’s work on Instagram long before she decided to enroll at MIT. “And now I have the chance to study with him,” says Cho, who works at Infinite — MIT’s fashion magazine — and takes classes in both mechanical engineering and design. “I find that having a creative project like this one, with a goal in mind, is the best way for me to learn. I feel like it reinforces my neural pathways, and I know it helps me retain information. I find myself walking down the street or in my room, thinking about possible solutions for this gown. It never feels like work.”

For Lee, who studied computational art during his master’s program, his course is already a successful experiment. He’d like to offer a full-length version of “Introduction to Physical Computing for Artists” during the school year. With 10 sessions instead of four, he says, students would be able to complete their projects, instead of stopping at an MVP.   

“Prior to coming to MIT, I’d only taught at art institutions,” says Lee. “Here, I needed to revise my focus, to redefine the value of art education for students who most likely were not going to pursue art as a profession. For me, the new definition was selecting a group of skills that are necessary in making this type of art, but that can also be applied to other areas and fields. Skills like sensitivity to materials, tactile dexterity, and abstract thinking. Why not learn these skills in an atmosphere that is experimental, visually based, sometimes a little uncomfortable. And why not learn that you don’t need to be an artist to make art. You just have to be excited about it.”

Custom, 3D-printed heart replicas look and pump just like the real thing

MIT engineers are hoping to help doctors tailor treatments to patients’ specific heart form and function, with a custom robotic heart. The team has developed a procedure to 3D print a soft and flexible replica of a patient’s heart. Image: Melanie Gonick, MIT

By Jennifer Chu | MIT News Office

No two hearts beat alike. The size and shape of the heart can vary from one person to the next. These differences can be particularly pronounced for people living with heart disease, as their hearts and major vessels work harder to overcome any compromised function.

MIT engineers are hoping to help doctors tailor treatments to patients’ specific heart form and function, with a custom robotic heart. The team has developed a procedure to 3D print a soft and flexible replica of a patient’s heart. They can then control the replica’s action to mimic that patient’s blood-pumping ability.

The procedure involves first converting medical images of a patient’s heart into a three-dimensional computer model, which the researchers can then 3D print using a polymer-based ink. The result is a soft, flexible shell in the exact shape of the patient’s own heart. The team can also use this approach to print a patient’s aorta — the major artery that carries blood out of the heart to the rest of the body.

To mimic the heart’s pumping action, the team has fabricated sleeves similar to blood pressure cuffs that wrap around a printed heart and aorta. The underside of each sleeve resembles precisely patterned bubble wrap. When the sleeve is connected to a pneumatic system, researchers can tune the outflowing air to rhythmically inflate the sleeve’s bubbles and contract the heart, mimicking its pumping action. 
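The article doesn’t describe the control software, but the timing logic of such a driver is straightforward to sketch. In this hedged Python example — an illustration, not the team’s published controller — a target heart rate is split into an inflation phase (systole) and a deflation phase (diastole), with a stand-in valve function where real hardware would switch a solenoid:

```python
import time

# Hedged sketch of rhythmic pneumatic actuation: split each cardiac cycle
# into an inflation phase ("systole") and a deflation phase ("diastole").
# The valve() function is a stand-in; real hardware would toggle a
# solenoid valve through a DAQ or GPIO interface.

def valve(open_: bool) -> None:
    print("valve OPEN  (inflate bubbles, squeeze heart)" if open_
          else "valve CLOSED (deflate, let heart refill)")

def beat(rate_bpm: float, systole_fraction: float = 0.35) -> None:
    """Run one cardiac cycle at the given heart rate."""
    period = 60.0 / rate_bpm             # seconds per beat
    systole = period * systole_fraction  # roughly a third of the cycle
    valve(True)
    time.sleep(systole)
    valve(False)
    time.sleep(period - systole)

if __name__ == "__main__":
    for _ in range(5):
        beat(rate_bpm=72)  # a typical resting heart rate
```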

The researchers can also inflate a separate sleeve surrounding a printed aorta to constrict the vessel. This constriction, they say, can be tuned to mimic aortic stenosis — a condition in which the aortic valve narrows, causing the heart to work harder to force blood through the body.

Doctors commonly treat aortic stenosis by surgically implanting a synthetic valve designed to widen the aorta’s natural valve. In the future, the team says that doctors could potentially use their new procedure to first print a patient’s heart and aorta, then implant a variety of valves into the printed model to see which design results in the best function and fit for that particular patient. The heart replicas could also be used by research labs and the medical device industry as realistic platforms for testing therapies for various types of heart disease.

“All hearts are different,” says Luca Rosalia, a graduate student in the MIT-Harvard Program in Health Sciences and Technology. “There are massive variations, especially when patients are sick. The advantage of our system is that we can recreate not just the form of a patient’s heart, but also its function in both physiology and disease.”

Rosalia and his colleagues report their results in a study appearing in Science Robotics. MIT co-authors include Caglar Ozturk, Debkalpa Goswami, Jean Bonnemain, Sophie Wang, and Ellen Roche, along with Benjamin Bonner of Massachusetts General Hospital, James Weaver of Harvard University, and Christopher Nguyen, Rishi Puri, and Samir Kapadia at the Cleveland Clinic in Ohio.

Print and pump

In January 2020, team members, led by mechanical engineering professor Ellen Roche, developed a “biorobotic hybrid heart” — a general replica of a heart, made from synthetic muscle containing small, inflatable cylinders, which they could control to mimic the contractions of a real beating heart.

Shortly after those efforts, the Covid-19 pandemic forced Roche’s lab, along with most others on campus, to temporarily close. Undeterred, Rosalia continued tweaking the heart-pumping design at home.

“I recreated the whole system in my dorm room that March,” Rosalia recalls.

Months later, the lab reopened, and the team continued where it left off, working to improve the control of the heart-pumping sleeve, which they tested in animal and computational models. They then expanded their approach to develop sleeves and heart replicas that are specific to individual patients. For this, they turned to 3D printing.

“There is a lot of interest in the medical field in using 3D printing technology to accurately recreate patient anatomy for use in preprocedural planning and training,” notes Wang, who is a vascular surgery resident at Beth Israel Deaconess Medical Center in Boston.

An inclusive design

In the new study, the team took advantage of 3D printing to produce custom replicas of actual patients’ hearts. They used a polymer-based ink that, once printed and cured, can squeeze and stretch, similarly to a real beating heart.

As their source material, the researchers used medical scans of 15 patients diagnosed with aortic stenosis. The team converted each patient’s images into a three-dimensional computer model of the patient’s left ventricle (the main pumping chamber of the heart) and aorta. They fed this model into a 3D printer to generate a soft, anatomically accurate shell of both the ventricle and vessel.
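The paper’s exact pipeline isn’t given here, but the step from a segmented scan to a printable shell is a standard one, sketched below under stated assumptions: marching cubes (from scikit-image) extracts the boundary surface of a binary volume, and numpy-stl writes a mesh a printer can slice. The hollow sphere is a synthetic stand-in for real patient data:

```python
import numpy as np
from skimage import measure  # scikit-image
from stl import mesh         # numpy-stl

# Hedged sketch of an images-to-printable-model step (the study's actual
# pipeline may differ). Input: a segmented volume where voxels inside the
# tissue of interest are 1 and everything else is 0.

def make_hollow_sphere(n: int = 64) -> np.ndarray:
    """Toy stand-in for a segmented scan: a spherical shell."""
    z, y, x = np.mgrid[:n, :n, :n]
    r = np.sqrt((x - n / 2) ** 2 + (y - n / 2) ** 2 + (z - n / 2) ** 2)
    return ((r > n * 0.3) & (r < n * 0.4)).astype(np.float32)

volume = make_hollow_sphere()

# Extract the 0/1 boundary surface as a triangle mesh.
verts, faces, _normals, _values = measure.marching_cubes(volume, level=0.5)

# Pack the triangles into an STL file, ready for slicing and printing.
shell = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
for i, face in enumerate(faces):
    shell.vectors[i] = verts[face]
shell.save("printable_shell.stl")
print(f"{len(verts)} vertices, {len(faces)} triangles")
```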

The action of the soft, robotic models can be controlled to mimic the patient’s blood-pumping ability. Image: Melanie Gonick, MIT

The team also fabricated sleeves to wrap around the printed forms. They tailored each sleeve’s pockets such that, when wrapped around their respective forms and connected to a small air pumping system, the sleeves could be tuned separately to realistically contract and constrict the printed models.

The researchers showed that for each model heart, they could accurately recreate the same heart-pumping pressures and flows that were previously measured in each respective patient.

“Being able to match the patients’ flows and pressures was very encouraging,” Roche says. “We’re not only printing the heart’s anatomy, but also replicating its mechanics and physiology. That’s the part that we get excited about.”

Going a step further, the team aimed to replicate some of the interventions that a handful of the patients underwent, to see whether the printed heart and vessel responded in the same way. Some patients had received valve implants designed to widen the aorta. Roche and her colleagues implanted similar valves in the printed aortas modeled after each patient. When they activated the printed heart to pump, they observed that the implanted valve improved flow in much the same way it had in the actual patients after their procedures.

Finally, the team used an actuated printed heart to compare implants of different sizes, to see which would result in the best fit and flow — something they envision clinicians could potentially do for their patients in the future.

“Patients would get their imaging done, which they do anyway, and we would use that to make this system, ideally within the day,” says co-author Nguyen. “Once it’s up and running, clinicians could test different valve types and sizes and see which works best, then use that to implant.”

Ultimately, Roche says the patient-specific replicas could help develop and identify ideal treatments for individuals with unique and challenging cardiac geometries.

“Designing inclusively for a large range of anatomies, and testing interventions across this range, may increase the addressable target population for minimally invasive procedures,” Roche says.

This research was supported, in part, by the National Science Foundation, the National Institutes of Health, and the National Heart, Lung, and Blood Institute.


Learning challenges shape a mechanical engineer’s path

“I observed assistive technologies — developed by scientists and engineers my friends and I never met — which liberated us. My dream has always been to be one of those engineers,” Hermus says. Credit: Tony Pulsone

By Michaela Jarvis | Department of Mechanical Engineering

Before James Hermus started elementary school, he was a happy, curious kid who loved to learn. By the end of first grade, however, all that started to change, he says. As his schoolbooks became more advanced, Hermus could no longer memorize the words on each page, and pretend to be reading. He clearly knew the material the teacher presented in class; his teachers could not understand why he was unable to read and write his assignments. He was accused of being lazy and not trying hard enough.

Hermus was fortunate to have parents who sought out neuropsychology testing — which documented an enormous discrepancy between his native intelligence and his symbol decoding and phonemic awareness. Yet despite receiving a diagnosis of dyslexia, Hermus and his family encountered resistance at his school. According to Hermus, the school’s reading specialist did not “believe” in dyslexia, and, he says, the principal threatened his family with truancy charges when they took him out of school each day to attend tutoring.

Hermus’ school, like many across the country, was reluctant to provide accommodations for students with learning disabilities who were not two years behind in two subjects, Hermus says. For this reason, obtaining and maintaining accommodations, such as extended time and a reader, was a constant battle from first through 12th grade: Students who performed well lost their right to accommodations. Only through persistence and parental support did Hermus succeed in an educational system which he says all too often fails students with learning disabilities.

By the time Hermus was in high school, he had to become a strong self-advocate. In order to access advanced courses, he needed to be able to read more and faster, so he sought out adaptive technology — Kurzweil, a text-to-audio program. This, he says, was truly life-changing. At first, to use this program he had to disassemble textbooks, feed the pages through a scanner, and digitize them.

After working his way to the University of Wisconsin at Madison, Hermus found a research opportunity in medical physics and then later in biomechanics. Interestingly, the steep challenges that Hermus faced during his education had developed in him “the exact skill set that makes a successful researcher,” he says. “I had to be organized, advocate for myself, seek out help to solve problems that others had not seen before, and be excessively persistent.”

While working as a member of Professor Darryl Thelen’s Neuromuscular Biomechanics Lab at Madison, Hermus helped design and test a sensor for measuring tendon stress. He recognized his strengths in mechanical design. During this undergraduate research, he co-authored numerous journal and conference papers. These experiences and a desire to help people with physical disabilities propelled him to MIT.

“MIT is an incredible place. The people in MechE at MIT are extremely passionate and unassuming. I am not unusual at MIT,” Hermus says. Credit: Tony Pulsone

In September 2022, Hermus completed his PhD in mechanical engineering from MIT. He has been an author on seven papers in peer-reviewed journals, three as first author and four of them published when he was an undergraduate. He has won awards for his academics and for his mechanical engineering research and has served as a mentor and an advocate for disability awareness in several different contexts.

His work as a researcher stems directly from his personal experience, Hermus says. As a student in a special education classroom, “I observed assistive technologies — developed by scientists and engineers my friends and I never met — which liberated us. My dream has always been to be one of those engineers.”

Hermus’ work aims to investigate and model human interaction with objects where both substantial motion and force are present. His research has demonstrated that the way humans perform such everyday actions as turning a steering wheel or opening a door is very different from much of robotics. He showed that specific patterns exist in this behavior, patterns that provide insight into neural control. In 2020, Hermus was the first author on a paper on this topic, which was published in the Journal of Neurophysiology and later won first place in the MIT Mechanical Engineering Research Exhibition. Using this insight, Hermus and his colleagues implemented these strategies on a Kuka LBR iiwa robot to learn about how humans regulate their many degrees of freedom. This work was published in IEEE Transactions on Robotics in 2022. More recently, Hermus has collaborated with researchers at the University of Pittsburgh to see if these ideas prove useful in the development of brain-computer interfaces — using electrodes implanted in the brain to control a prosthetic robotic arm.

While the hardware of prosthetics and exoskeletons is advancing, Hermus says, there are daunting limitations to the field in the descriptive modeling of human physical behavior, especially during contact with objects. Without these descriptive models, developing generalizable implementations of prosthetics, exoskeletons, and rehabilitation robotics will prove challenging.

“We need competent descriptive models of human physical interaction,” he says.

While earning his master’s and doctoral degrees at MIT, Hermus worked with Neville Hogan, the Sun Jae Professor of Mechanical Engineering, in the Eric P. and Evelyn E. Newman Laboratory for Biomechanics and Human Rehabilitation. Hogan has high praise for the research Hermus has conducted over his six years in the Newman lab.

“James has done superb work for both his master’s and doctoral theses. He tackled a challenging problem and made excellent and timely progress towards its solution. He was a key member of my research group,” Hogan says. “James’ commitment to his research is unquestionably a reflection of his own experience.”

Following postdoctoral research at MIT, where he has also been a part-time lecturer, Hermus is now beginning postdoctoral work with Professor Aude Billard at EPFL in Switzerland, where he hopes to gain experience with learning and optimization methods to further his human motor control research.

Hermus’ enthusiasm for his research is palpable, and his zest for learning and life shines through despite the hurdles his dyslexia presented. He demonstrates a similar kind of excitement for ski-touring and rock-climbing with the MIT Outing Club, working at MakerWorkshop, and being a member of the MechE community.

“MIT is an incredible place. The people in MechE at MIT are extremely passionate and unassuming. I am not unusual at MIT,” he says. “Nearly every person I know well has a unique story with an unconventional path.”

Engineers devise a modular system to produce efficient, scalable aquabots

Researchers have come up with an innovative approach to building deformable underwater robots using simple repeating substructures. The team has demonstrated the new system in two different example configurations, one like an eel, pictured here in the MIT tow tank. Credit: Courtesy of the researchers

By David L. Chandler | MIT News Office

Underwater structures that can change their shapes dynamically, the way fish do, push through water much more efficiently than conventional rigid hulls. But constructing deformable devices that can change the curve of their body shapes while maintaining a smooth profile is a long and difficult process. MIT’s RoboTuna, for example, was composed of about 3,000 different parts and took about two years to design and build.

Now, researchers at MIT and their colleagues — including one from the original RoboTuna team — have come up with an innovative approach to building deformable underwater robots, using simple repeating substructures instead of unique components. The team has demonstrated the new system in two different example configurations, one like an eel and the other a wing-like hydrofoil. The principle itself, however, allows for virtually unlimited variations in form and scale, the researchers say.

The work is being reported in the journal Soft Robotics, in a paper by MIT research assistant Alfonso Parra Rubio, professors Michael Triantafyllou and Neil Gershenfeld, and six others.

Existing soft robots for marine applications are generally built at small scales, while many useful real-world applications require devices at scales of meters. The new modular system the researchers propose could easily be extended to such sizes and beyond, without requiring the kind of retooling and redesign that would be needed to scale up current systems.

The deformable robots are made with lattice-like pieces, called voxels, that are low density and have high stiffness. Credit: Courtesy of the researchers

“Scalability is a strong point for us,” says Parra Rubio. Given the low density and high stiffness of the lattice-like pieces, called voxels, that make up their system, he says, “we have more room to keep scaling up,” whereas most currently used technologies “rely on high-density materials facing drastic problems” in moving to larger sizes.

The individual voxels in the team’s experimental, proof-of-concept devices are mostly hollow structures made up of cast plastic pieces with narrow struts in complex shapes. The box-like shapes are load-bearing in one direction but soft in others, an unusual combination achieved by blending stiff and flexible components in different proportions.

“Treating soft versus hard robotics is a false dichotomy,” Parra Rubio says. “This is something in between, a new way to construct things.” Gershenfeld, head of MIT’s Center for Bits and Atoms, adds that “this is a third way that marries the best elements of both.”

“Smooth flexibility of the body surface allows us to implement flow control that can reduce drag and improve propulsive efficiency, resulting in substantial fuel saving,” says Triantafyllou, who is the Henry L. and Grace Doherty Professor in Ocean Science and Engineering, and was part of the RoboTuna team.



In one of the devices produced by the team, the voxels are attached end-to-end in a long row to form a meter-long, snake-like structure. The body is made up of four segments, each consisting of five voxels, with an actuator in the center that can pull a wire attached to each of the two voxels on either side, contracting them and causing the structure to bend. The whole structure of 20 units is then covered with a rib-like supporting structure, and then a tight-fitting waterproof neoprene skin. The researchers deployed the structure in an MIT tow tank to show its efficiency in the water, and demonstrated that it could generate enough thrust from its undulating motions to propel itself forward.
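A minimal sketch of that undulation pattern — hypothetical, since the group’s control code isn’t given in the article — drives all four segments with the same sine wave, delayed segment by segment so that a wave of bending travels down the body and pushes water aft:

```python
import math

# Hedged sketch of a traveling-wave controller for a four-segment body:
# each segment's actuator follows the same sinusoid, lagged by a fixed
# phase relative to its neighbor, producing a backward-moving body wave.

NUM_SEGMENTS = 4
WAVE_FREQ_HZ = 1.0                        # undulation rate
MAX_BEND_DEG = 30.0                       # peak segment bend
PHASE_LAG = 2 * math.pi / NUM_SEGMENTS    # head-to-tail phase step

def segment_bend(seg: int, t: float) -> float:
    """Commanded bend angle (degrees) for segment `seg` at time t (s)."""
    return MAX_BEND_DEG * math.sin(2 * math.pi * WAVE_FREQ_HZ * t - seg * PHASE_LAG)

# Print one second of commands, sampled every 250 ms.
for t_ms in range(0, 1001, 250):
    t = t_ms / 1000
    print(f"t={t:4.2f}s", [round(segment_bend(s, t), 1) for s in range(NUM_SEGMENTS)])
```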

“There have been many snake-like robots before,” Gershenfeld says. “But they’re generally made of bespoke components, as opposed to these simple building blocks that are scalable.”

For example, Parra Rubio says, a snake-like robot built by NASA was made up of thousands of unique pieces, whereas for this group’s snake, “we show that there are some 60 pieces.” And compared to the two years spent designing and building the MIT RoboTuna, this device was assembled in about two days, he says.

The individual voxels are mostly hollow structures made up of cast plastic pieces with narrow struts in complex shapes. Credit: Courtesy of the researchers

The other device they demonstrated is a wing-like shape, or hydrofoil, made up of an array of the same voxels but able to change its profile shape and therefore control the lift-to-drag ratio and other properties of the wing. Such wing-like shapes could be used for a variety of purposes, ranging from generating power from waves to helping to improve the efficiency of ship hulls — a pressing demand, as shipping is a significant source of carbon emissions.

The wing shape, unlike the snake, is covered in an array of scale-like overlapping tiles, designed to press down on each other to maintain a waterproof seal even as the wing changes its curvature. One possible application might be in some kind of addition to a ship’s hull profile that could reduce the formation of drag-inducing eddies and thus improve its overall efficiency, a possibility that the team is exploring with collaborators in the shipping industry.

The team also created a wing-like hydrofoil. Credit: Courtesy of the researchers

Ultimately, the concept might be applied to a whale-like submersible craft, using its morphable body shape to create propulsion. Such a craft could evade bad weather by staying below the surface, but without the noise and turbulence of conventional propulsion. The concept could also be applied to parts of other vessels, such as racing yachts, where having a keel or a rudder that could curve gently during a turn instead of remaining straight could provide an extra edge. “Instead of being rigid or just having a flap, if you can actually curve the way fish do, you can morph your way around the turn much more efficiently,” Gershenfeld says.


The research team included Dixia Fan of Westlake University in China; Benjamin Jenett SM ’15, PhD ’20 of Discrete Lattice Industries; Jose del Aguila Ferrandis, Amira Abdel-Rahman, and David Preiss of MIT; and Filippos Tourlomousis of the Demokritos Research Center of Greece. The work was supported by the U.S. Army Research Lab, CBA Consortia funding, and the MIT Sea Grant Program.

Sensing with purpose

Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science and the Media Lab, seeks to develop wireless technology that can sense the physical world in ways that were not possible before. Image: Adam Glanzman

By Adam Zewe | MIT News Office

Fadel Adib never expected that science would get him into the White House, but in August 2015 the MIT graduate student found himself demonstrating his research to the president of the United States.

Adib, fellow grad student Zachary Kabelac, and their advisor, Dina Katabi, showcased a wireless device that uses Wi-Fi signals to track an individual’s movements.

As President Barack Obama looked on, Adib walked back and forth across the floor of the Oval Office, collapsed onto the carpet to demonstrate the device’s ability to monitor falls, and then sat still so Katabi could explain to the president how the device was measuring his breathing and heart rate.

“Zach started laughing because he could see that my heart rate was 110 as I was demoing the device to the president. I was stressed about it, but it was so exciting. I had poured a lot of blood, sweat, and tears into that project,” Adib recalls.

For Adib, the White House demo was an unexpected — and unforgettable — culmination of a research project he had launched four years earlier when he began his graduate training at MIT. Now, as a newly tenured associate professor in the Department of Electrical Engineering and Computer Science and the Media Lab, he keeps building off that work. Adib, the Doherty Chair of Ocean Utilization, seeks to develop wireless technology that can sense the physical world in ways that were not possible before.

In his Signal Kinetics group, Adib and his students apply knowledge and creativity to global problems like climate change and access to health care. They are using wireless devices for contactless physiological sensing, such as measuring someone’s stress level using Wi-Fi signals. The team is also developing battery-free underwater cameras that could explore uncharted regions of the oceans, tracking pollution and the effects of climate change. And they are combining computer vision and radio frequency identification (RFID) technology to build robots that find hidden items, to streamline factory and warehouse operations and, ultimately, alleviate supply chain bottlenecks.

While these areas may seem quite different, each time they launch a new project, the researchers uncover common threads that tie the disciplines together, Adib says.

“When we operate in a new field, we get to learn. Every time you are at a new boundary, in a sense you are also like a kid, trying to understand these different languages, bring them together, and invent something,” he says.

A science-minded child

A love of learning has driven Adib since he was a young child growing up in Tripoli on the coast of Lebanon. He had been interested in math and science for as long as he could remember, and had boundless energy and insatiable curiosity as a child.

“When my mother wanted me to slow down, she would give me a puzzle to solve,” he recalls.

By the time Adib started college at the American University of Beirut, he knew he wanted to study computer engineering and had his sights set on MIT for graduate school.

Seeking to kick-start his future studies, Adib reached out to several MIT faculty members to ask about summer internships. He received a response from the first person he contacted. Katabi, the Thuan and Nicole Pham Professor in the Department of Electrical Engineering and Computer Science (EECS), and a principal investigator in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT Jameel Clinic, interviewed him and accepted him for a position. He immersed himself in the lab work and, as the end of summer approached, Katabi encouraged him to apply for grad school at MIT and join her lab.

“To me, that was a shock because I felt this imposter syndrome. I thought I was moving like a turtle with my research, but I did not realize that with research itself, because you are at the boundary of human knowledge, you are expected to progress iteratively and slowly,” he says.

As an MIT grad student, he began contributing to a number of projects. But his passion for invention pushed him to embark into unexplored territory. Adib had an idea: Could he use Wi-Fi to see through walls?

“It was a crazy idea at the time, but my advisor let me work on it, even though it was not something the group had been working on at all before. We both thought it was an exciting idea,” he says.

As Wi-Fi signals travel in space, a small part of the signal passes through walls — the same way light passes through windows — and is then reflected by whatever is on the other side. Adib wanted to use these signals to “see” what people on the other side of a wall were doing.

Discovering new applications

There were a lot of ups and downs (“I’d say many more downs than ups at the beginning”), but Adib made progress. First, he and his teammates were able to detect people on the other side of a wall, then they could determine their exact location. Almost by accident, he discovered that the device could be used to monitor someone’s breathing.

“I remember we were nearing a deadline and my friend Zach and I were working on the device, using it to track people on the other side of the wall. I asked him to hold still, and then I started to see him appearing and disappearing over and over again. I thought, could this be his breathing?” Adib says.
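The signal-processing idea behind that observation is easy to sketch: chest motion of a few millimeters modulates the strength of the body’s reflection, so the dominant frequency of that slow oscillation, searched within the physiological band, gives breaths per minute. The Python example below is a hedged illustration, with synthetic data standing in for real Wi-Fi measurements:

```python
import numpy as np

# Hedged sketch: estimate breathing rate from the periodic rise and fall
# of a reflected signal's amplitude. Synthetic data stands in for real
# radio measurements.

FS = 20.0                          # samples per second from the radio
t = np.arange(0, 60, 1 / FS)       # one minute of data
TRUE_BPM = 14                      # breaths per minute to recover
reflection = 1.0 + 0.05 * np.sin(2 * np.pi * (TRUE_BPM / 60) * t)
reflection += 0.02 * np.random.default_rng(0).standard_normal(t.size)  # noise

# Peak of the spectrum within the physiological band (6-30 breaths/min).
spectrum = np.abs(np.fft.rfft(reflection - reflection.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / FS)
band = (freqs > 0.1) & (freqs < 0.5)
est_hz = freqs[band][np.argmax(spectrum[band])]
print(f"estimated breathing rate: {est_hz * 60:.1f} breaths/min")
```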

Eventually, they enabled their Wi-Fi device to monitor heart rate and other vital signs. The technology was spun out into a startup, which presented Adib with a conundrum once he finished his PhD — whether to join the startup or pursue a career in academia.

He decided to become a professor because he wanted to dig deeper into the realm of invention. But after living through the winter of 2014-2015, when nearly 109 inches of snow fell on Boston (a record), Adib was ready for a change of scenery and a warmer climate. He applied to universities all over the United States, and while he had some tempting offers, Adib ultimately realized he didn’t want to leave MIT. He joined the MIT faculty as an assistant professor in 2016 and was named associate professor in 2020.

“When I first came here as an intern, even though I was thousands of miles from Lebanon, I felt at home. And the reason for that was the people. This geekiness — this embrace of intellect — that is something I find to be beautiful about MIT,” he says.

He’s thrilled to work with brilliant people who are also passionate about problem-solving. The members of his research group are diverse, and they each bring unique perspectives to the table, which Adib says is vital to encourage the intellectual back-and-forth that drives their work.

Diving into a new project

For Adib, research is exploration. Take his work on oceans, for instance. He wanted to make an impact on climate change, and after exploring the problem, he and his students decided to build a battery-free underwater camera.

Adib learned that the ocean, which covers 70 percent of the planet, plays the single largest role in the Earth’s climate system. Yet more than 95 percent of it remains unexplored. That seemed like a problem the Signal Kinetics group could help solve, he says.

But diving into this research area was no easy task. Adib studies Wi-Fi systems, but Wi-Fi does not work underwater. And it is difficult to recharge a battery once it is deployed in the ocean, making it hard to build an autonomous underwater robot that can do large-scale sensing.

So, the team borrowed from other disciplines, building an underwater camera that uses acoustics to power its equipment and capture and transmit images.

“We had to use piezoelectric materials, which come from materials science, to develop transducers, which come from oceanography, and then on top of that we had to marry these things with technology from RF known as backscatter,” he says. “The biggest challenge becomes getting these things to gel together. How do you decode these languages across fields?”
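A toy model makes the backscatter trick concrete — the following is illustrative only, not the device’s firmware. The node never generates its own signal; it encodes bits by switching its transducer between reflecting and absorbing an incoming acoustic ping, and the receiver decodes the bits from the energy of the returned echo:

```python
import numpy as np

# Hedged illustration of underwater backscatter: transmit nothing,
# just reflect (bit 1) or absorb (bit 0) a remote source's carrier,
# then decode bits at the receiver by per-slot energy detection.

rng = np.random.default_rng(0)
SAMPLES_PER_BIT = 100
bits = rng.integers(0, 2, size=16)          # e.g., pixel bits of an image

carrier = np.sin(2 * np.pi * 0.1 * np.arange(SAMPLES_PER_BIT))
echo = np.concatenate([b * carrier for b in bits])   # reflect or absorb
echo += 0.1 * rng.standard_normal(echo.size)         # ocean noise

energy = (echo.reshape(len(bits), SAMPLES_PER_BIT) ** 2).sum(axis=1)
decoded = (energy > energy.mean()).astype(int)
print("sent:   ", bits.tolist())
print("decoded:", decoded.tolist())
```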

It’s a challenge that continues to motivate Adib as he and his students tackle problems that are too big for one discipline.

He’s excited by the possibility of using his undersea wireless imaging technology to explore distant planets. These same tools could also enhance aquaculture, which could help eradicate food insecurity, or support other emerging industries.

To Adib, the possibilities seem endless.

“With each project, we discover something new, and that opens up a whole new world to explore. The biggest driver of our work in the future will be what we think is impossible, but that we could make possible,” he says.

Featured video: Creating a sense of feeling

Shriya Srinivasan as a dancer and researcher | Snapshots taken from ‘Connecting the human body to the outside world’ video on YouTube

“The human body is just engineered so beautifully,” says Shriya Srinivasan PhD ’20, a research affiliate at MIT’s Koch Institute for Integrative Cancer Research, a junior fellow at the Society of Fellows at Harvard University, and former doctoral student in the Harvard-MIT Program in Health Sciences and Technology.

Both a biomedical engineer and a dancer, Srinivasan is dedicated to investigating the body’s movements and sensations. As a PhD student she worked in Media Lab Professor Hugh Herr’s Biomechatronics Group on a system that helps patients with amputation feel what their prostheses are feeling and send feedback from the device to the body. She has also studied the south Indian classical dance form Bharathanatyam for 22 years and co-directs the Anubhava Dance Company.

“The kind of relief and sense of fulfillment I get from the arts is very different from what I get from research and science,” she says. “I find that research often nourishes my intellectual curiosity, and the arts are helping to build that emotional and spiritual growth. But in both worlds, I’m thinking about how we create a sense of feeling, how we control emotion and your physiological response. That’s really beautiful to me.”

Video by: Jason Kimball/MIT News | 5 minutes 34 seconds.

Looking beyond “technology for technology’s sake”

“Learning about the social implications of the technology you’re working on is really important,” says senior Austen Roberson. Photo: Jodi Hilton

By Laura Rosado | MIT News correspondent

Austen Roberson’s favorite class at MIT is 2.S007 (Design and Manufacturing I-Autonomous Machines), in which students design, build, and program a fully autonomous robot to accomplish tasks laid out on a themed game board.

“The best thing about that class is everyone had a different idea,” says Roberson. “We all had the same game board and the same instructions given to us, but the robots that came out of people’s minds were so different.”

The game board was Mars-themed, with a model shuttle that could be lifted to score points. Roberson’s robot, nicknamed Tank Evans after a character from the movie “Surf’s Up,” employed a clever strategy to accomplish this task. Instead of spinning the gears that would raise the entire mechanism, Roberson realized a claw gripper could wrap around the outside of the shuttle and lift it manually.

“That wasn’t the intended way,” says Roberson, but his outside-of-the-box strategy ended up winning him the competition at the conclusion of the class, which was part of the New Engineering Education Transformation (NEET) program. “It was a really great class for me. I get a lot of gratification out of building something with my hands and then using my programming and problem-solving skills to make it move.”

Roberson, a senior, is majoring in aerospace engineering with a minor in computer science. As his winning robot demonstrates, he thrives at the intersection of both fields. He references the Mars Curiosity Rover as the type of project that inspires him; he even keeps a Lego model of Curiosity on his desk. 

“You really have to trust that the hardware you’ve made is up to the task, but you also have to trust your software equally as much,” says Roberson, referring to the challenges of operating a rover from millions of miles away. “Is the robot going to continue to function after we’ve put it into space? Both of those things have to come together in such a perfect way to make this stuff work.”

Outside of formal classwork, Roberson has pursued multiple research opportunities at MIT that blend his academic interests. He’s worked on satellite situational awareness with the Space Systems Laboratory, tested drone flight in different environments with the Aerospace Controls Laboratory, and is currently working on zero-shot machine learning for anomaly detection in big datasets with the Mechatronics Research Laboratory.

“Whether that be space exploration or something else, all I can hope for is that I’m making an impact, and that I’m making a difference in people’s lives,” says Roberson. Photo: Jodi Hilton

Even while tackling these challenging technical problems head-on, Roberson is also actively thinking about the social impact of his work. He takes classes in the Program on Science, Technology, and Society, which has taught him not only how societal change throughout history has been driven by technological advancements, but also how to be a thoughtful engineer in his own career.

“Learning about the social implications of the technology you’re working on is really important,” says Roberson, acknowledging that his work in automation and machine learning needs to address these questions. “Sometimes, we get caught up in technology for technology’s sake. How can we take these same concepts and bring them to people to help in a tangible, physical way? How have we come together as a scientific community to really affect social change, and what can we do in the future to continue affecting that social change?”

Roberson is already working through what these questions mean for him personally. He’s been a member of the National Society of Black Engineers (NSBE) throughout his entire college experience, which includes serving on the executive board for two years. He’s helped to organize workshops focused on everything from interview preparation to financial literacy, as well as social events to build community among members.

“The mission of the organization is to increase the number of culturally responsible Black engineers that excel academically, succeed professionally, and positively impact the community,” says Roberson. “My goal with NSBE was to be able to provide a resource to help everybody get to where they wanted to be, to be the vehicle to really push people to be their best, and to provide the resources that people needed and wanted to advance themselves professionally.”

In fact, one of his most memorable MIT experiences is the first conference he attended as a member of NSBE.

“Being able to see all these different people from all of these different schools come together as a family and just talk to each other, it’s a very rewarding experience,” Roberson says. “It’s important to be able to surround yourself with people who have similar professional goals and share similar backgrounds and experiences with you. It’s definitely the proudest I’ve been of any club at MIT.”

Looking toward his own career, Roberson wants to find a way to work on fast-paced, cutting-edge technologies that move society forward in a positive way.

“Whether that be space exploration or something else, all I can hope for is that I’m making an impact, and that I’m making a difference in people’s lives,” says Roberson. “I think learning about space is learning about ourselves as well. The more you can learn about the stuff that’s out there, you can take those lessons to reflect on what’s down here as well.”
