All posts by Wyss Institute


Sensing Parkinson’s symptoms

MyoExo integrates a series of sensors into a wearable device capable of detecting slight changes in muscle strain and bulging, enabling it to measure and track the symptoms of Parkinson’s disease. Credit: Oluwaseun Araromi

By Matthew Goisman/SEAS Communications

Nearly one million people in the United States live with Parkinson’s disease. The degenerative condition affects the neurons in the brain that produce the neurotransmitter dopamine, which can impact motor function in multiple ways, including muscle tremors, limb rigidity and difficulty walking.

There is currently no cure for Parkinson’s disease, and existing treatments are limited by a lack of quantitative data about the progression of the disease.

MyoExo, a translation-focused research project based on technology developed at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Wyss Institute for Biologically Inspired Engineering, aims to provide that data. The team is refining the technology and starting to develop a business plan as part of the Harvard Innovation Labs venture program. The MyoExo wearable device aims to not only serve as a remote monitoring device for patients in an at-home setting but also be sensitive enough to aid early diagnosis of Parkinson’s disease.

“This is a disease that’s affecting a lot of people and it seems like the main therapeutics that tackle this have not changed significantly in the past several decades,” said Oluwaseun Araromi, Research Associate in Materials Science and Mechanical Engineering at SEAS and the Wyss Institute.

The MyoExo technology consists of a series of wearable sensors, each one capable of detecting slight changes in muscle strain and bulging. When integrated into a wearable device, the data can provide what Araromi described as “muscle-centric physiological signatures.”

“The enabling technology underlying this is a sensor that detects small changes in the shape of an object,” he said. “Parkinson’s disease, especially in its later stages, really expresses itself as a movement disorder, so sensors that can detect shape changes can also detect changes in the shape of muscle as people move.”

MyoExo emerged from research done in the Harvard Biodesign Lab of Conor Walsh, the Paul A. Maeder Professor of Engineering and Applied Sciences, and the Microrobotics Lab of Rob Wood, the Charles River Professor of Engineering and Applied Sciences at SEAS. Araromi, Walsh and Wood co-authored a paper on their research into resilient wearable sensors in November 2020, around the same time the team began to focus on medical applications of the technology.

“If we had these hypersensitive sensors in something that a person was wearing, we could detect how their muscles were bulging,” Walsh said. “That was more application-agnostic. We didn’t know exactly where that would be the most important, and I credit Seun and our Wyss collaborators for being the ones to think about identifying Parkinson’s applications.”

Araromi sees the MyoExo technology as having value for three major stakeholders: the pharmaceutical industry, clinicians and physicians, and patients. Pharmaceutical companies could use data from the wearable system to quantify their medications’ effect on Parkinson’s symptoms, while clinicians could determine if one treatment regimen is more effective than another for a specific patient. Patients could use the system to track their own treatment, whether that’s medication, physical therapy, or both.

“Some patients are very incentivized to track their progress,” Araromi said. “They want to know that if they were really good last week and did all of the exercises that they were prescribed, their wearable device would tell them their symptomatology has reduced by 5% compared to the week before. We envision that as something that would really encourage individuals to adhere to their treatment regimens.”

MyoExo’s sensor technology is based on research conducted in the Harvard Biodesign Lab of Conor Walsh and the Microrobotics Lab of Rob Wood at SEAS, and further developed through the Wyss Institute for Biologically Inspired Engineering and Harvard Innovation Labs venture program. Credit: Oluwaseun Araromi

Araromi joined SEAS and the Wyss Institute as a postdoctoral researcher in 2016, having earned a Ph.D. in mechanical engineering from the University of Bristol in England and completed a postdoc at the Swiss Federal Institute of Technology Lausanne.

His interest in sensor technology made him a great fit for research spanning the Biodesign and Microrobotics labs, and his early work included helping develop an exosuit to aid with walking.

“I was initially impressed with Seun’s strong background in materials, transduction and physics,” Walsh said. “He really understood how you’d think about creating novel sensors with soft materials. Seun’s really the translation champion for the project in terms of driving forward the technology, but at the same time trying to think about the need in the market, and how we demonstrate that we can meet that.”

The technology is currently in the human testing phase to demonstrate proof-of-concept detection of clinically relevant metrics with support from the Wyss Institute Validation Project program. Araromi wants to show that the wearable device can quantify the difference between the muscle movements of someone with Parkinson’s and someone without. From there, the goal is to demonstrate that the device can quantify whether a person has early- or late-stage symptoms of the disease, as well as their response to treatment.

“We are evaluating our technology and validating our technical approach, making sure that as it’s currently constructed, even in this crude form, we can get consistent data and results,” Araromi said. “We’re doing this in a small pilot phase, such that if there are issues, we can fix those issues, and then expand to a larger population where we would test our device on more individuals with Parkinson’s disease. That should really convince ourselves and hopefully the community that we are able to reach a few key technical milestones, and then garner more interest and potentially investment and partnership.”

Team builds first living robots that can reproduce

AI-designed (C-shaped) organisms push loose stem cells (white) into piles as they move through their environment. Credit: Douglas Blackiston and Sam Kriegman

By Joshua Brown, University of Vermont Communications

To persist, life must reproduce. Over billions of years, organisms have evolved many ways of replicating, from budding plants to sexual animals to invading viruses.

Now scientists at the University of Vermont, Tufts University, and the Wyss Institute for Biologically Inspired Engineering at Harvard University have discovered an entirely new form of biological reproduction—and applied their discovery to create the first-ever, self-replicating living robots.

The same team that built the first living robots (“Xenobots,” assembled from frog cells—reported in 2020) has discovered that these computer-designed and hand-assembled organisms can swim out into their tiny dish, find single cells, gather hundreds of them together, and assemble “baby” Xenobots inside their Pac-Man-shaped “mouth”—that, a few days later, become new Xenobots that look and move just like themselves.

And then these new Xenobots can go out, find cells, and build copies of themselves. Again and again.

“With the right design—they will spontaneously self-replicate,” says Joshua Bongard, Ph.D., a computer scientist and robotics expert at the University of Vermont who co-led the new research.

The results of the new research were published in the Proceedings of the National Academy of Sciences.

Into the Unknown

In a Xenopus laevis frog, the embryonic cells used to build Xenobots would develop into skin. “They would be sitting on the outside of a tadpole, keeping out pathogens and redistributing mucus,” says Michael Levin, Ph.D., a professor of biology and director of the Allen Discovery Center at Tufts University and co-leader of the new research. “But we’re putting them into a novel context. We’re giving them a chance to reimagine their multicellularity.” Levin is also an Associate Faculty member at the Wyss Institute.

As Pac-Man-shaped Xenobot “parents” move around their environment, they collect loose stem cells in their “mouths” that, over time, aggregate to create “offspring” Xenobots that develop to look just like their creators. Credit: Doug Blackiston and Sam Kriegman

And what they imagine is something far different from skin. “People have thought for quite a long time that we’ve worked out all the ways that life can reproduce or replicate. But this is something that’s never been observed before,” says co-author Douglas Blackiston, Ph.D., the senior scientist at Tufts University and the Wyss Institute who assembled the Xenobot “parents” and developed the biological portion of the new study.

“This is profound,” says Levin. “These cells have the genome of a frog, but, freed from becoming tadpoles, they use their collective intelligence, a plasticity, to do something astounding.” In earlier experiments, the scientists were amazed that Xenobots could be designed to achieve simple tasks. Now they are stunned that these biological objects—a computer-designed collection of cells—will spontaneously replicate. “We have the full, unaltered frog genome,” says Levin, “but it gave no hint that these cells can work together on this new task,” of gathering and then compressing separated cells into working self-copies.

“These are frog cells replicating in a way that is very different from how frogs do it. No animal or plant known to science replicates in this way,” says Sam Kriegman, Ph.D., the lead author on the new study, who completed his Ph.D. in Bongard’s lab at UVM and is now a post-doctoral researcher at Tufts’ Allen Center and Harvard University’s Wyss Institute for Biologically Inspired Engineering.

On its own, the Xenobot parent, made of some 3,000 cells, forms a sphere. “These can make children but then the system normally dies out after that. It’s very hard, actually, to get the system to keep reproducing,” says Kriegman. But with an artificial intelligence program working on the Deep Green supercomputer cluster at UVM’s Vermont Advanced Computing Core, an evolutionary algorithm was able to test billions of body shapes in simulation—triangles, squares, pyramids, starfish—to find ones that allowed the cells to be more effective at the motion-based “kinematic” replication reported in the new research.

“We asked the supercomputer at UVM to figure out how to adjust the shape of the initial parents, and the AI came up with some strange designs after months of chugging away, including one that resembled Pac-Man,” says Kriegman. “It’s very non-intuitive. It looks very simple, but it’s not something a human engineer would come up with. Why one tiny mouth? Why not five? We sent the results to Doug and he built these Pac-Man-shaped parent Xenobots. Then those parents built children, who built grandchildren, who built great-grandchildren, who built great-great-grandchildren.” In other words, the right design greatly extended the number of generations.
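The study’s actual pipeline ran physics simulations on the Deep Green cluster, but the outer loop of such an evolutionary design search is compact enough to sketch. Everything below is illustrative: the occupancy-grid body representation and the stand-in fitness score are invented for this example, whereas the real objective was replication performance in simulation.

```python
import random

# Purely illustrative: a minimal evolutionary search over candidate body
# shapes. The real study evaluated designs in a physics simulation; the
# fitness function here is a hypothetical stand-in.

GRID = 8  # candidate bodies are 8x8 occupancy grids of cells

def random_body():
    return [[random.randint(0, 1) for _ in range(GRID)] for _ in range(GRID)]

def mutate(body):
    child = [row[:] for row in body]
    r, c = random.randrange(GRID), random.randrange(GRID)
    child[r][c] ^= 1  # add or remove one cell
    return child

def fitness(body):
    # Stand-in score; the study's objective was the number of kinematic
    # replication rounds a design achieved in simulation.
    mass = sum(sum(row) for row in body)
    concavity = sum(row.count(0) for row in body[: GRID // 2])  # crude "mouth"
    return concavity if mass > GRID else 0

population = [random_body() for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                                # keep the best half
    children = [mutate(random.choice(parents)) for _ in range(10)]
    population = parents + children

print("best stand-in fitness:", fitness(max(population, key=fitness)))
```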

Kinematic replication is well-known at the level of molecules—but it has never been observed before at the scale of whole cells or organisms.

An AI-designed “parent” organism (C shape; red) beside stem cells that have been compressed into a ball (“offspring”; green). Credit: Douglas Blackiston and Sam Kriegman

“We’ve discovered that there is this previously unknown space within organisms, or living systems, and it’s a vast space,” says Bongard. “How do we then go about exploring that space? We found Xenobots that walk. We found Xenobots that swim. And now, in this study, we’ve found Xenobots that kinematically replicate. What else is out there?”

Or, as the scientists write in the Proceedings of the National Academy of Sciences study: “life harbors surprising behaviors just below the surface, waiting to be uncovered.”

Responding to Risk

Some people may find this exhilarating. Others may react with concern, or even terror, to the notion of a self-replicating biotechnology. For the team of scientists, the goal is deeper understanding.

“We are working to understand this property: replication. The world and technologies are rapidly changing. It’s important, for society as a whole, that we study and understand how this works,” says Bongard. These millimeter-sized living machines, entirely contained in a laboratory, easily extinguished, and vetted by federal, state and institutional ethics experts, “are not what keep me awake at night. What presents risk is the next pandemic; accelerating ecosystem damage from pollution; intensifying threats from climate change,” says UVM’s Bongard. “This is an ideal system in which to study self-replicating systems. We have a moral imperative to understand the conditions under which we can control it, direct it, douse it, exaggerate it.”

Bongard points to the COVID epidemic and the hunt for a vaccine. “The speed at which we can produce solutions matters deeply. If we can develop technologies, learning from Xenobots, where we can quickly tell the AI: ‘We need a biological tool that does X and Y and suppresses Z’—that could be very beneficial. Today, that takes an exceedingly long time.” The team aims to accelerate how quickly people can go from identifying a problem to generating solutions—“like deploying living machines to pull microplastics out of waterways or build new medicines,” Bongard says.

“We need to create technological solutions that grow at the same rate as the challenges we face,” Bongard says.

And the team sees promise in the research for advancements toward regenerative medicine. “If we knew how to tell collections of cells to do what we wanted them to do, ultimately, that’s regenerative medicine—that’s the solution to traumatic injury, birth defects, cancer, and aging,” says Levin. “All of these different problems are here because we don’t know how to predict and control what groups of cells are going to build. Xenobots are a new platform for teaching us.”

The scientists behind the Xenobots participated in a live panel discussion on December 1, 2021 to discuss the latest developments in their research. Credit: Wyss Institute at Harvard University

Face masks that can diagnose COVID-19

By Lindsay Brownell

Most people associate the term “wearable” with a fitness tracker, smartwatch, or wireless earbuds. But what if cutting-edge biotechnology were integrated into your clothing, and could warn you when you were exposed to something dangerous?

A team of researchers from the Wyss Institute for Biologically Inspired Engineering at Harvard University and the Massachusetts Institute of Technology has found a way to embed synthetic biology reactions into fabrics, creating wearable biosensors that can be customized to detect pathogens and toxins and alert the wearer.

The team has integrated this technology into standard face masks to detect the presence of the SARS-CoV-2 virus in a patient’s breath. The button-activated mask gives results within 90 minutes at levels of accuracy comparable to standard nucleic acid-based diagnostic tests like the polymerase chain reaction (PCR). The achievement is reported in Nature Biotechnology.

The wFDCF technology can be integrated into any standard face mask. The wearer pushes a button on the mask that releases a small amount of water into the system, which provides results within 90 minutes. Credit: Wyss Institute at Harvard University

“We have essentially shrunk an entire diagnostic laboratory down into a small, synthetic biology-based sensor that works with any face mask, and combines the high accuracy of PCR tests with the speed and low cost of antigen tests,” said co-first author Peter Nguyen, Ph.D., a Research Scientist at the Wyss Institute. “In addition to face masks, our programmable biosensors can be integrated into other garments to provide on-the-go detection of dangerous substances including viruses, bacteria, toxins, and chemical agents.”

Taking cells out of the equation

The SARS-CoV-2 biosensor is the culmination of three years of work on what the team calls their wearable freeze-dried cell-free (wFDCF) technology, which is built upon earlier iterations created in the lab of Wyss Core Faculty member and senior author Jim Collins. The technique involves extracting and freeze-drying the molecular machinery that cells use to read DNA and produce RNA and proteins. These biological elements are shelf-stable for long periods of time, and activating them is simple: just add water. Synthetic genetic circuits can be added to create biosensors that produce a detectable signal in response to the presence of a target molecule.

The researchers first applied this technology to diagnostics by integrating it into a tool to address the Zika virus outbreak in 2015. They created biosensors that can detect pathogen-derived RNA molecules and coupled them with a colored or fluorescent indicator protein, then embedded the genetic circuit into paper to create a cheap, accurate, portable diagnostic. Following their success embedding their biosensors into paper, they next set their sights on making them wearable.

These flexible, wearable biosensors can be integrated into fabric to create clothing that can detect pathogens and environmental toxins and alert the wearer via a companion smartphone app. Credit: Wyss Institute at Harvard University

“Other groups have created wearables that can sense biomolecules, but those techniques have all required putting living cells into the wearable itself, as if the user were wearing a tiny aquarium. If that aquarium ever broke, then the engineered bugs could leak out onto the wearer, and nobody likes that idea,” said Nguyen. He and his teammates started investigating whether their wFDCF technology could solve this problem, methodically testing it in more than 100 different kinds of fabrics.

Then, the COVID-19 pandemic struck.

Pivoting from wearables to face masks

“We wanted to contribute to the global effort to fight the virus, and we came up with the idea of integrating wFDCF into face masks to detect SARS-CoV-2. The entire project was done under quarantine or strict social distancing starting in May 2020. We worked hard, sometimes bringing non-biological equipment home and assembling devices manually. It was definitely different from the usual lab infrastructure we’re used to working under, but everything we did has helped us ensure that the sensors would work in real-world pandemic conditions,” said co-first author Luis Soenksen, Ph.D., a Postdoctoral Fellow at the Wyss Institute.

The team called upon every resource they had available to them at the Wyss Institute to create their COVID-19-detecting face masks, including toehold switches developed in Core Faculty member Peng Yin’s lab and SHERLOCK sensors developed in the Collins lab. The final product consists of three different freeze-dried biological reactions that are sequentially activated by the release of water from a reservoir via the single push of a button.

The first reaction cuts open the SARS-CoV-2 virus’ membrane to expose its RNA. The second reaction is an amplification step that makes numerous double-stranded copies of the Spike-coding gene from the viral RNA. The final reaction uses CRISPR-based SHERLOCK technology to detect any Spike gene fragments, and in response cut a probe molecule into two smaller pieces that are then reported via a lateral flow assay strip. Whether or not there are any Spike fragments available to cut depends on whether the patient has SARS-CoV-2 in their breath. This difference is reflected in changes in a simple pattern of lines that appears on the readout portion of the device, similar to an at-home pregnancy test.

When SARS-CoV-2 particles are present, the wFDCF system cuts a molecular bond that changes the pattern of lines that form in the readout strip, similar to an at-home pregnancy test. Credit: Wyss Institute at Harvard University

The wFDCF face mask is the first SARS-CoV-2 nucleic acid test that achieves high accuracy rates comparable to current gold standard RT-PCR tests while operating fully at room temperature, eliminating the need for heating or cooling instruments and allowing the rapid screening of patient samples outside of labs.

“This work shows that our freeze-dried, cell-free synthetic biology technology can be extended to wearables and harnessed for novel diagnostic applications, including the development of a face mask diagnostic. I am particularly proud of how our team came together during the pandemic to create deployable solutions for addressing some of the world’s testing challenges,” said Collins, Ph.D., who is also the Termeer Professor of Medical Engineering & Science at MIT.

Beyond the COVID-19 pandemic

The Wyss Institute’s wearable freeze-dried cell-free (wFDCF) technology can quickly diagnose COVID-19 from virus in patients’ breath, and can also be integrated into clothing to detect a wide variety of pathogens and other dangerous substances. Credit: Wyss Institute at Harvard University

The face mask diagnostic is in some ways the icing on the cake for the team, which had to overcome numerous challenges in order to make their technology truly wearable, including capturing droplets of a liquid substance within a flexible, unobtrusive device and preventing evaporation. The face mask diagnostic omits electronic components in favor of ease of manufacturing and low cost, but integrating more permanent elements into the system opens up a wide range of other possible applications.

In their paper, the researchers demonstrate that a network of fiber optic cables can be integrated into their wFDCF technology to detect fluorescent light generated by the biological reactions, indicating detection of the target molecule with a high level of accuracy. This digital signal can be sent to a smartphone app that allows the wearer to monitor their exposure to a vast array of substances.

“This technology could be incorporated into lab coats for scientists working with hazardous materials or pathogens, scrubs for doctors and nurses, or the uniforms of first responders and military personnel who could be exposed to dangerous pathogens or toxins, such as nerve gas,” said co-author Nina Donghia, a Staff Scientist at the Wyss Institute.

The team is actively searching for manufacturing partners who are interested in helping to enable the mass production of the face mask diagnostic for use during the COVID-19 pandemic, as well as for detecting other biological and environmental hazards.

“This team’s ingenuity and dedication to creating a useful tool to combat a deadly pandemic while working under unprecedented conditions is impressive in and of itself. But even more impressive is that these wearable biosensors can be applied to a wide variety of health threats beyond SARS-CoV-2, and we at the Wyss Institute are eager to collaborate with commercial manufacturers to realize that potential,” said Don Ingber, M.D., Ph.D., the Wyss Institute’s Founding Director. Ingber is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and Boston Children’s Hospital, and Professor of Bioengineering at the Harvard John A. Paulson School of Engineering and Applied Sciences.

Additional authors of the paper include Nicolaas M. Angenent-Mari and Helena de Puig from the Wyss Institute and MIT; former Wyss and MIT member Ally Huang who is now at Ampylus; Rose Lee, Shimyn Slomovic, Geoffrey Lansberry, Hani Sallum, Evan Zhao, and James Niemi from the Wyss Institute; and Tommaso Galbersanini from Dreamlux.

This research was supported by the Defense Threat Reduction Agency under grant HDTRA1-14-1-0006, the Paul G. Allen Frontiers Group, the Wyss Institute for Biologically Inspired Engineering, Harvard University, Johnson & Johnson through the J&J Lab Coat of the Future QuickFire Challenge award, CONACyT grant 342369 / 408970, and MIT-692 TATA Center fellowship 2748460.

Wielding a laser beam deep inside the body

The laser steering device is able to trace complex trajectories such as an exposed wire as well as a word within geometrical shapes. Credit: Wyss Institute at Harvard University

A microrobotic opto-electro-mechanical device able to steer a laser beam with high speed and a large range of motion could enhance the possibilities of minimally invasive surgeries

By Benjamin Boettner

Minimally invasive surgeries, in which surgeons gain access to internal tissues through natural orifices or small external incisions, are common practice in medicine. They are performed for problems as diverse as delivering stents through catheters, treating abdominal complications, and performing transnasal operations at the skull base in patients with neurological conditions.

The ends of devices for such surgeries are highly flexible (or “articulated”) to enable the visualization and specific manipulation of the surgical site in the target tissue. In the case of energy-delivering devices that allow surgeons to cut or dry (desiccate) tissues, and stop internal bleeds (coagulate) deep inside the body, a heat-generating energy source is added to the end of the device. However, presently available energy sources delivered via a fiber or electrode, such as radio frequency currents, have to be brought close to the target site, which limits surgical precision and can cause unwanted burns in adjacent tissue sections and smoke development.

Laser technology, which already is widely used in a number of external surgeries, such as those performed in the eye or skin, would be an attractive solution. For internal surgeries, the laser beam needs to be precisely steered, positioned and quickly repositioned at the distal end of an endoscope, which cannot be accomplished with the currently available relatively bulky technology.


Responding to an unmet need for a robotic surgical device that is flexible enough to access hard-to-reach areas of the G.I. tract while causing minimal peripheral tissue damage, researchers at the Wyss Institute and Harvard SEAS have developed a laser steering device that has the potential to improve surgical outcomes for patients. Credit: Wyss Institute at Harvard University

Now, robotic engineers led by Wyss Associate Faculty member Robert Wood, Ph.D., and postdoctoral fellow Peter York, Ph.D., at Harvard University’s Wyss Institute for Biologically Inspired Engineering and John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a laser-steering microrobot in a miniaturized 6×16 millimeter package that operates with high speed and precision, and can be integrated with existing endoscopic tools. Their approach, reported in Science Robotics, could help significantly enhance the capabilities of numerous minimally invasive surgeries.

This collage shows a prototype of the laser steering device creating a star trajectory at 5000 mm/s. Credit: Wyss Institute at Harvard University


“To enable minimally invasive laser surgery inside the body, we devised a microrobotic approach that allows us to precisely direct a laser beam at small target sites in complex patterns within an anatomical area of interest,” said York, the first and corresponding author on the study and a postdoctoral fellow on Wood’s microrobotics team. “With its large range of articulation, minimal footprint, and fast and precise action, this laser-steering end-effector has great potential to enhance surgical capabilities simply by being added to existing endoscopic devices in a plug-and-play fashion.”

The team needed to overcome the basic challenges in design, actuation, and microfabrication of the optical steering mechanism that enables tight control over the laser beam after it has exited from an optical fiber. These challenges, along with the need for speed and precision, were exacerbated by the size constraints – the entire mechanism had to be housed in a cylindrical structure with roughly the diameter of a drinking straw to be useful for endoscopic procedures.

“We found that for steering and re-directing the laser beam, a configuration of three small mirrors that can rapidly rotate with respect to one another in a small ‘galvanometer’ design provided a sweet spot for our miniaturization effort,” said second author Rut Peña, a mechanical engineer with micro-manufacturing expertise in Wood’s group. “To get there, we leveraged methods from our microfabrication arsenal in which modular components are laminated step-wise onto a superstructure on the millimeter scale – a highly effective fabrication process when it comes to iterating on designs quickly in search of an optimum, and delivering a robust strategy for mass-manufacturing a successful product.”

The microrobotic laser-steering end-effector (on the right) can be used as a fitted add-on accessory for existing endoscopic systems (on the left) for use in minimally invasive surgery. Credit: Wyss Institute at Harvard University

The team demonstrated that their laser-steering end-effector, miniaturized to a cylinder measuring merely 6 mm in diameter and 16 mm in length, was able to map out and follow complex trajectories in which multiple laser ablations could be performed with high speed, over a large range, and be repeated with high accuracy.

To further show that the device, when attached to the end of a common colonoscope, could be applied to a life-like endoscopic task, York and Peña, advised by Wyss Clinical Fellow Daniel Kent, M.D., successfully simulated the resection of polyps by navigating their device via tele-operation in a benchtop phantom tissue made of rubber. Kent also is a resident physician in general surgery at the Beth Israel Deaconess Medical Center.

“In this multi-disciplinary approach, we managed to harness our ability to rapidly prototype complex microrobotic mechanisms that we have developed over the past decade to provide clinicians with a non-disruptive solution that could allow them to advance the possibilities of minimally invasive surgeries in the human body with life-altering or potentially life-saving impact,” said senior author Wood, Ph.D., who also is the Charles River Professor of Engineering and Applied Sciences at SEAS.

The laser steering device performing a colonoscope demo in a life-size model of the colon. Credit: Wyss Institute at Harvard University

Wood’s microrobotics team together with technology translation experts at the Wyss Institute have patented their approach and are now further de-risking their medical technology (MedTech) as an add-on for surgical endoscopes.

“The Wyss Institute’s focus on microrobotic devices and this new laser-steering device developed by Robert Wood’s team working across disciplines with clinicians and experts in translation will hopefully revolutionize how minimally invasive surgical procedures are carried out in a number of disease areas,” said Wyss Founding Director Donald Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and Boston Children’s Hospital, and Professor of Bioengineering at SEAS.

The study was funded by the National Science Foundation under award #CMMI-1830291, and the Wyss Institute for Biologically Inspired Engineering.

Robotic swarm swims like a school of fish

Bluebots are fish-shaped robots that can coordinate their movements in three dimensions underwater, rather than the two dimensions previously achieved by Kilobots. Credit: Harvard SEAS

By Leah Burrows / SEAS Communications

Schools of fish exhibit complex, synchronized behaviors that help them find food, migrate, and evade predators. No one fish or sub-group of fish coordinates these movements, nor do fish communicate with each other about what to do next. Rather, these collective behaviors emerge from so-called implicit coordination — individual fish making decisions based on what they see their neighbors doing.

This type of decentralized, autonomous self-organization and coordination has long fascinated scientists, especially in the field of robotics.

Now, a team of researchers at Harvard’s Wyss Institute and John A. Paulson School of Engineering and Applied Sciences (SEAS) has developed fish-inspired robots that can synchronize their movements like a real school of fish, without any external control. It is the first time researchers have demonstrated complex 3D collective behaviors with implicit coordination in underwater robots.

“Robots are often deployed in areas that are inaccessible or dangerous to humans, areas where human intervention might not even be possible,” said Florian Berlinger, a Ph.D. Candidate at the Wyss Institute and SEAS and first author of the paper. “In these situations, it really benefits you to have a highly autonomous robot swarm that is self-sufficient. By using implicit rules and 3D visual perception, we were able to create a system that has a high degree of autonomy and flexibility underwater where things like GPS and WiFi are not accessible.”

The research is published in Science Robotics.

The fish-inspired robotic swarm, dubbed Blueswarm, was created in the lab of Wyss Associate Faculty member Radhika Nagpal, Ph.D., who is also the Fred Kavli Professor of Computer Science at SEAS. Nagpal’s lab is a pioneer in self-organizing systems, from their 1,000 robot Kilobot swarm to their termite-inspired robotic construction crew.

However, most previous robotic swarms operated in two-dimensional space. Three-dimensional spaces, like air and water, pose significant challenges to sensing and locomotion.

To overcome these challenges, the researchers developed a vision-based coordination system in their fish robots based on blue LED lights. Each underwater robot, called a Bluebot, is equipped with two cameras and three LED lights. The on-board, fisheye-lens cameras detect the LEDs of neighboring Bluebots and use a custom algorithm to determine their distance, direction and heading. Based on the simple production and detection of LED light, the researchers demonstrated that the Blueswarm could exhibit complex self-organized behaviors, including aggregation, dispersion, and circle formation.

These fish-inspired robots can synchronize their movements without any outside control. Based on the simple production and detection of LED light, the robotic collective exhibits complex self-organized behaviors, including aggregation, dispersion, and circle formation. Credit: Harvard University’s Self-organizing Systems Research Group

“Each Bluebot implicitly reacts to its neighbors’ positions,” said Berlinger. “So, if we want the robots to aggregate, then each Bluebot will calculate the position of each of its neighbors and move towards the center. If we want the robots to disperse, the Bluebots do the opposite. If we want them to swim as a school in a circle, they are programmed to follow lights directly in front of them in a clockwise direction.”
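In rough pseudocode terms, each behavior reduces a robot’s perceived neighborhood to a single motion vector. The sketch below is a schematic reading of Berlinger’s description, not the Blueswarm firmware; the neighbor positions are assumed to come from the LED-based 3D perception described above.

```python
# Schematic sketch of implicit, neighbor-based control (not the Blueswarm
# code): each robot computes its own motion vector from the positions of
# neighbors it perceives via their LEDs; there is no central controller.

def centroid(neighbors):
    """Mean position of perceived neighbors, each an (x, y, z) tuple."""
    n = len(neighbors)
    return tuple(sum(p[i] for p in neighbors) / n for i in range(3))

def aggregate_step(my_pos, neighbors, gain=0.1):
    """Move toward the center of the perceived group."""
    c = centroid(neighbors)
    return tuple(gain * (c[i] - my_pos[i]) for i in range(3))

def disperse_step(my_pos, neighbors, gain=0.1):
    """Move away from the center of the perceived group."""
    return tuple(-v for v in aggregate_step(my_pos, neighbors, gain))

# Example: one robot at the origin with two neighbors ahead and above.
print(aggregate_step((0.0, 0.0, 0.0), [(1.0, 0.0, 0.5), (0.0, 1.0, 0.5)]))
```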

The researchers also simulated a simple search mission with a red light in the tank. Using the dispersion algorithm, the Bluebots spread out across the tank until one comes close enough to the light source to detect it. Once the robot detects the light, its LEDs begin to flash, which triggers the aggregation algorithm in the rest of the school. From there, all the Bluebots aggregate around the signaling robot.
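Read as a per-robot state machine, the search mission simply composes those two behaviors. Again, this is a hypothetical rendering of the described logic, with invented state and sensor names:

```python
# Hypothetical per-robot state machine for the search mission described
# above: disperse until the target light is found, then flash and aggregate.

DISPERSE, AGGREGATE = "disperse", "aggregate"

def next_state(state, sees_target, sees_flashing_neighbor):
    if sees_target or sees_flashing_neighbor:
        return AGGREGATE  # the finder flashes; the rest converge on it
    return state          # keep dispersing until someone finds the target

state = DISPERSE
for sees_target, sees_flash in [(False, False), (False, True), (False, True)]:
    state = next_state(state, sees_target, sees_flash)
    print(state)  # disperse, aggregate, aggregate
```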


Blueswarm, a Harvard Wyss- and SEAS-developed underwater robot collective, uses a 3D vision-based coordination system and 3D locomotion to coordinate the movements of its individual Bluebots autonomously, mimicking the behavior of schools of fish. Credit: Harvard SEAS

“Our results with Blueswarm represent a significant milestone in the investigation of underwater self-organized collective behaviors,” said Nagpal. “Insights from this research will help us develop future miniature underwater swarms that can perform environmental monitoring and search in visually-rich but fragile environments like coral reefs. This research also paves a way to better understand fish schools, by synthetically recreating their behavior.”

The research was co-authored by Melvin Gauci, Ph.D., a former Wyss Technology Development Fellow. It was supported in part by the Office of Naval Research, the Wyss Institute for Biologically Inspired Engineering, and an Amazon AWS Research Award.

Ultra-sensitive and resilient sensor for soft robotic systems

Graduate student Moritz Graule demonstrates a fabric arm sleeve with embedded sensors. The sensors detect the small changes in Graule’s forearm muscle through the fabric. Such a sleeve could be used in everything from virtual reality simulations and sportswear to clinical diagnostics for neurodegenerative diseases like Parkinson’s Disease. Credit: Oluwaseun Araromi/Harvard SEAS

By Leah Burrows / SEAS communications

Newly engineered Slinky-like strain sensors for textiles and soft robotic systems survive the washing machine, cars and hammers.

Think about your favorite t-shirt, the one you’ve worn a hundred times, and all the abuse you’ve put it through. You’ve washed it more times than you can remember, spilled on it, stretched it, crumpled it up, maybe even singed it leaning over the stove once. We put our clothes through a lot, and if the smart textiles of the future are going to survive all that we throw at them, their components are going to need to be resilient.

Now, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Wyss Institute for Biologically Inspired Engineering have developed an ultra-sensitive, seriously resilient strain sensor that can be embedded in textiles and soft robotic systems. The research is published in Nature.

“Current soft strain gauges are really sensitive but also really fragile,” said Oluwaseun Araromi, Ph.D., a Research Associate in Materials Science and Mechanical Engineering at SEAS and the Wyss Institute and first author of the paper. “The problem is that we’re working in an oxymoronic paradigm — highly sensitive sensors are usually very fragile, and very strong sensors aren’t usually very sensitive. So, we needed to find mechanisms that could give us enough of each property.”

In the end, the researchers created a design that looks and behaves very much like a Slinky.

“A Slinky is a solid cylinder of rigid metal but if you pattern it into this spiral shape, it becomes stretchable,” said Araromi. “That is essentially what we did here. We started with a rigid bulk material, in this case carbon fiber, and patterned it in such a way that the material becomes stretchable.”

The pattern is known as a serpentine meander, because its sharp ups and downs resemble the slithering of a snake. The patterned conductive carbon fibers are then sandwiched between two pre-strained elastic substrates. The overall electrical conductivity of the sensor changes as the edges of the patterned carbon fiber come out of contact with each other, similar to the way the individual spirals of a Slinky come out of contact with each other when you pull both ends. This process happens even with small amounts of strain, which is the key to the sensor’s high sensitivity.
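The article does not give the sensor’s actual resistance-strain curve, but the readout principle can be illustrated with an assumed calibration: because resistance rises steeply and monotonically as the patterned edges separate, a measured resistance can be mapped back to strain. All numbers below are invented for illustration.

```python
import numpy as np

# Illustrative readout sketch with made-up calibration numbers: the text
# says conductivity drops as the patterned fiber edges separate, so a
# monotonic resistance-to-strain calibration curve can invert the reading.

calib_strain_pct = np.array([0.0, 0.5, 1.0, 2.0, 5.0])       # assumed
calib_resistance_ohm = np.array([100, 140, 200, 450, 2000])  # assumed

def strain_from_resistance(r_ohm):
    """Interpolate percent strain from a measured resistance."""
    return np.interp(r_ohm, calib_resistance_ohm, calib_strain_pct)

print(strain_from_resistance(300))  # ~1.4% strain under these assumptions
```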

A close-up view of the sensor’s patterned conductive carbon fibers. The fibers are sandwiched between two prestrained elastic substrates. The overall electrical conductivity of the sensor changes as the edges of the patterned carbon fiber come out of contact with each other. Credit: James Weaver/Harvard SEAS

Unlike current highly sensitive stretchable sensors, which rely on exotic materials such as silicon or gold nanowires, this sensor doesn’t require special manufacturing techniques or even a clean room. It could be made using any conductive material.

The researchers tested the resiliency of the sensor by stabbing it with a scalpel, hitting it with a hammer, running it over with a car, and throwing it in a washing machine ten times. The sensor emerged from each test unscathed. To demonstrate its sensitivity, the researchers embedded the sensor in a fabric arm sleeve and asked a participant to make different gestures with their hand, including a fist, open palm, and pinching motion. The sensors detected the small changes in the subject’s forearm muscle through the fabric and a machine learning algorithm was able to successfully classify these gestures.
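The gesture-classification pipeline itself is not detailed in the article, so the following is a generic sketch of the idea on synthetic data: summarize each window of sensor readings into a few features and train an off-the-shelf classifier. The feature choices and signal levels here are assumptions, not the study’s pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Generic sketch of gesture classification from sensor windows (synthetic
# data; the study used real forearm recordings and its own algorithm).

rng = np.random.default_rng(0)

def window_features(signal):
    """Summarize one window of readings into a small feature vector."""
    return [signal.mean(), signal.std(), signal.max() - signal.min()]

X, y = [], []
for label, level in enumerate([0.2, 0.5, 0.8]):  # mock fist / palm / pinch
    for _ in range(100):
        window = level + 0.05 * rng.standard_normal(50)
        X.append(window_features(window))
        y.append(label)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC().fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```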

“These features of resilience and the mechanical robustness put this sensor in a whole new camp,” said Araromi.

Such a sleeve could be used in everything from virtual reality simulations and sportswear to clinical diagnostics for neurodegenerative diseases like Parkinson’s Disease. Harvard’s Office of Technology Development has filed to protect the intellectual property associated with this project.

“The combination of high sensitivity and resilience are clear benefits of this type of sensor,” said senior author Robert Wood, Ph.D., Associate Faculty member at the Wyss Institute, and the Charles River Professor of Engineering and Applied Sciences at SEAS. “But another aspect that differentiates this technology is the low cost of the constituent materials and assembly methods. This will hopefully reduce the barriers to get this technology widespread in smart textiles and beyond.”

This ultra-sensitive resilient strain sensor can be embedded in textiles and soft robotic systems. Credit: Oluwaseun Araromi/Harvard SEAS

“We are currently exploring how this sensor can be integrated into apparel due to the intimate interface to the human body it provides,” says co-author and Wyss Associate Faculty member Conor Walsh, Ph.D., who also is the Paul A. Maeder Professor of Engineering and Applied Sciences at SEAS. “This will enable exciting new applications by being able to make biomechanical and physiological measurements throughout a person’s day, not possible with current approaches.”


The research was co-authored by Moritz A. Graule, Kristen L. Dorsey, Sam Castellanos, Jonathan R. Foster, Wen-Hao Hsu, Arthur E. Passy, James C. Weaver, Senior Staff Scientist at SEAS, and Joost J. Vlassak, the Abbott and James Lawrence Professor of Materials Engineering at SEAS. It was funded through the university’s strategic research alliance with Tata. The 6-year, $8.4M alliance was established in 2016 to advance Harvard innovation in fields including robotics, wearable technologies, and the internet of things (IoT).

Wearable technologies to make rehab more precise

A team led by Wyss Associate Faculty member Paolo Bonato, Ph.D., found in a recent study that wearable technology is suitable to accurately track motor recovery of individuals with brain injuries and thus allow clinicians to choose more effective interventions and to improve outcomes. Credit: Shutterstock/Dmytro Zinkevych

By Tim Sullivan / Spaulding Rehabilitation Hospital Communications

A group based out of the Spaulding Motion Analysis Lab at Spaulding Rehabilitation Hospital published “Enabling Precision Rehabilitation Interventions Using Wearable Sensors and Machine Learning to Track Motor Recovery” in the latest issue of npj Digital Medicine. The aim of the study is to lay the groundwork for the design of “precision rehabilitation” interventions by using wearable technologies to track the motor recovery of individuals with brain injury.

The study found that the technology is suitable to accurately track motor recovery and thus allow clinicians to choose more effective interventions and improve outcomes. The study was a collaborative effort among students and former students connected to the Motion Analysis Lab, working under faculty mentorship.

Paolo Bonato, Ph.D., Director of the Spaulding Motion Analysis Lab and senior author on the study, said, “Providing clinicians with precise data will enable them to design more effective interventions and improve the care we deliver. To have so many of our talented young scientists and researchers from our lab collaborate to create this meaningful paper is especially gratifying for all of our faculty who support our ongoing research enterprise.” Bonato is also an Associate Faculty member at Harvard’s Wyss Institute for Biologically Inspired Engineering.

Catherine Adans-Dester, P.T., Ph.D., a member of Dr. Bonato’s team, served as lead author on the manuscript. “The need to develop patient-specific interventions is apparent when one considers that clinical studies often report satisfactory motor gains only in a portion of participants, which suggests that clinical outcomes could be improved if we had better tools to develop patient-specific interventions. Data collected using wearable sensors provides clinicians with the opportunity to do so with little burden on clinicians and patients,” said Dr. Adans-Dester. The approach proposed in the paper relied on machine learning-based algorithms to derive clinical score estimates from wearable sensor data collected during functional motor tasks. Sensor-based score estimates showed strong agreement with those generated by clinicians.


The results of the study demonstrated that wearable sensor data can be used to derive accurate estimates of clinical scores utilized in the clinic to capture the severity of motor impairments and the quality of upper-limb movement patterns. In the study, the upper-limb Fugl-Meyer Assessment (FMA) scale was used to generate clinical scores of the severity of motor impairments, and the Functional Ability Scale (FAS) was used to generate clinical scores of the quality of movement. Wearable sensor data (i.e., accelerometer data) was collected during the performance of eight functional motor tasks taken from the Wolf Motor Function Test, thus providing a sample of gross arm movements and fine motor control tasks. Machine learning-based algorithms were developed to derive accurate estimates of the FMA and FAS clinical scores from the sensor data. A total of 37 study participants (16 stroke survivors and 21 traumatic brain injury survivors) participated in the study.
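In outline, this approach maps sensor-derived features to clinician-assigned scores with supervised learning. The sketch below runs on synthetic data with an off-the-shelf regressor; the study’s actual features, models, and validation are described in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Outline sketch with synthetic data: regress clinician-assigned scores
# (e.g., upper-limb FMA, 0-66) on features computed from accelerometer
# recordings of the functional motor tasks. Features here are mock values.

rng = np.random.default_rng(1)
n_participants = 37                                   # as in the study
features = rng.standard_normal((n_participants, 24))  # mock per-task stats
fma_scores = 66 / (1 + np.exp(-2 * features[:, 0]))   # mock targets, 0-66

model = RandomForestRegressor(n_estimators=200, random_state=0)
r2 = cross_val_score(model, features, fma_scores, cv=5, scoring="r2")
print("cross-validated R^2 per fold:", np.round(r2, 2))
```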

Involved in the study, in addition to Dr. Bonato and Dr. Adans-Dester, were Nicolas Hankov, Anne O’Brien, Gloria Vergara-Diaz, Randie Black-Schaffer, MD, and Ross Zafonte, DO, of the Harvard Medical School Department of Physical Medicine & Rehabilitation at Spaulding Rehabilitation Hospital, Boston, MA, USA; Jennifer Dy of the Department of Electrical and Computer Engineering, Northeastern University, Boston, MA; and Sunghoon I. Lee of the College of Information and Computer Sciences, University of Massachusetts Amherst, Amherst, MA.

Cutting surgical robots down to size

By Lindsay Brownell

Minimally invasive laparoscopic surgery, in which a surgeon uses tools and a tiny camera inserted into small incisions to perform operations, has made surgical procedures safer for both patients and doctors over the last half-century. Recently, surgical robots have started to appear in operating rooms to further assist surgeons by allowing them to manipulate multiple tools at once with greater precision, flexibility, and control than is possible with traditional techniques. However, these robotic systems are extremely large, often taking up an entire room, and their tools can be much larger than the delicate tissues and structures on which they operate.

The mini-RCM is controlled by three linear actuators (mini-LAs) that allow it to move in multiple dimensions and help correct hand tremors and other disturbances during teleoperation. Credit: Wyss Institute at Harvard University

A collaboration between Wyss Associate Faculty member Robert Wood, Ph.D., and Robotics Engineer Hiroyuki Suzuki of Sony Corporation has brought surgical robotics down to the microscale by creating a new, origami-inspired miniature remote center of motion manipulator (the “mini-RCM”). The robot is the size of a tennis ball, weighs about as much as a penny, and successfully performed a difficult mock surgical task, as described in a recent issue of Nature Machine Intelligence.

“The Wood lab’s unique technical capabilities for making micro-robots have led to a number of impressive inventions over the last few years, and I was convinced that it also had the potential to make a breakthrough in the field of medical manipulators as well,” said Suzuki, who began working with Wood on the mini-RCM in 2018 as part of a Harvard-Sony collaboration. “This project has been a great success.”

A mini robot for micro tasks

To create their miniature surgical robot, Suzuki and Wood turned to the Pop-Up MEMS manufacturing technique developed in Wood’s lab, in which materials are deposited on top of each other in layers that are bonded together, then laser-cut in a specific pattern that allows the desired three-dimensional shape to “pop up,” as in a children’s pop-up picture book. This technique greatly simplifies the mass-production of small, complex structures that would otherwise have to be painstakingly constructed by hand.

The team created a parallelogram shape to serve as the main structure of the robot, then fabricated three linear actuators (mini-LAs) to control the robot’s movement: one parallel to the bottom of the parallelogram that raises and lowers it, one perpendicular to the parallelogram that rotates it, and one at the tip of the parallelogram that extends and retracts the tool in use. The result was a robot that is much smaller and lighter than other microsurgical devices previously developed in academia.

The mini-LAs are themselves marvels in miniature, built around a piezoelectric ceramic material that changes shape when an electrical field is applied. The shape change pushes the mini-LA’s “runner unit” along its “rail unit” like a train on train tracks, and that linear motion is harnessed to move the robot. Because piezoelectric materials inherently deform as they change shape, the team also integrated LED-based optical sensors into the mini-LA to detect and correct any deviations from the desired movement, such as those caused by hand tremors.
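That sense-and-correct loop can be pictured as a simple feedback controller. The toy model below is not the device’s firmware: the plant model, gains, and disturbance are invented to show how an optical position estimate lets a controller cancel slow errors such as tremor.

```python
# Toy closed-loop sketch (not the mini-RCM firmware): the optical sensor
# supplies a position estimate, and a proportional controller adjusts the
# piezo drive so the runner tracks its target despite a slow disturbance.

def simulate(target_um=50.0, steps=200, kp=0.6):
    drive = 0.0
    drift = 0.0
    position = 0.0
    for _ in range(steps):
        drift += 0.01                    # slow disturbance, e.g. hand tremor
        position = 0.8 * drive + drift   # mock actuator + optical sensing
        error = target_um - position
        drive += kp * error              # proportional correction
    return position

print(f"final position: {simulate():.2f} um (target 50.00)")
```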

Steadier than a surgeon’s hands

To mimic the conditions of a teleoperated surgery, the team connected the mini-RCM to a Phantom Omni device, which manipulated the mini-RCM in response to the movements of a user’s hand controlling a pen-like tool. Their first test evaluated a human’s ability to trace a tiny square smaller than the tip of a ballpoint pen, looking through a microscope and either tracing it by hand, or tracing it using the mini-RCM. Using the mini-RCM dramatically improved user accuracy, reducing error by 68% compared to manual operation – an especially important quality given the precision required to repair small and delicate structures in the human body.

After the mini-RCM’s success on the tracing test, the researchers then created a mock version of a surgical procedure called retinal vein cannulation, in which a surgeon must carefully insert a needle through the eye to inject therapeutics into the tiny veins at the back of the eyeball. They fabricated a silicone tube the same size as the retinal vein (about twice the thickness of a human hair), and successfully punctured it with a needle attached to the end of the mini-RCM without causing local damage or disruption.

In addition to its efficacy in performing delicate surgical maneuvers, the mini-RCM’s small size provides another important benefit: it is easy to set up and install and, in the case of a complication or electrical outage, the robot can be easily removed from a patient’s body by hand.

“The Pop-Up MEMS method is proving to be a valuable approach in a number of areas that require small yet sophisticated machines, and it was very satisfying to know that it has the potential to improve the safety and efficiency of surgeries to make them even less invasive for patients,” said Wood, who is also the Charles River Professor of Engineering and Applied Sciences at Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS).

The researchers aim to increase the force of the robot’s actuators to cover the maximum forces experienced during an operation, and improve its positioning precision. They are also investigating using a laser with a shorter pulse during the machining process, to improve the mini-LAs’ sensing resolution.

“This unique collaboration between the Wood lab and Sony illustrates the benefits that can arise from combining the real-world focus of industry with the innovative spirit of academia, and we look forward to seeing the impact this work will have on surgical robotics in the near future,” said Wyss Institute Founding Director Don Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and Boston Children’s Hospital, and Professor of Bioengineering at SEAS.

Next-generation cockroach-inspired robot is small but mighty

The newly designed HAMR-JR alongside its predecessor, HAMR-VI. HAMR-JR is only slightly bigger in length and width than a penny, making it one of the smallest yet highly capable, high-speed insect-scale robots. Credit: Kaushik Jayaram/Harvard SEAS

By Leah Burrows

This itsy-bitsy robot can’t climb up the waterspout yet, but it can run, jump, carry heavy payloads and turn on a dime. Dubbed HAMR-JR, this microrobot, developed by researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and Harvard’s Wyss Institute for Biologically Inspired Engineering, is a half-scale version of the cockroach-inspired Harvard Ambulatory Microrobot, or HAMR.

About the size of a penny, HAMR-JR can perform almost all of the feats of its larger-scale predecessor, making it one of the most dexterous microrobots to date.

“Most robots at this scale are pretty simple and only demonstrate basic mobility,” said Kaushik Jayaram, Ph.D., a former postdoctoral fellow at SEAS and the Wyss Institute, and first author of the paper. “We have shown that you don’t have to compromise dexterity or control for size.”

Jayaram is currently an Assistant Professor at the University of Colorado, Boulder.

The research was presented virtually at the International Conference on Robotics and Automation (ICRA 2020) this week.

One of the big questions going into this research was whether or not the pop-up manufacturing process used to build previous versions of HAMR and other microbots, including the RoboBee, could be used to build robots at multiple scales — from tiny surgical bots to large-scale industrial robots.

PC-MEMS (short for printed circuit microelectromechanical systems) is a fabrication process in which the robot’s components are etched into a 2D sheet and then popped out in its 3D structure. To build HAMR-JR, the researchers simply shrunk the 2D sheet design of the robot — along with the actuators and onboard circuitry — to recreate a smaller robot with all the same functionalities.

“The wonderful part about this exercise is that we did not have to change anything about the previous design,” said Jayaram. “We proved that this process can be applied to basically any device at a variety of sizes.”

HAMR-JR can turn right, left and move forward and backward. Credit: Kaushik Jayaram/Harvard SEAS

HAMR-JR comes in at 2.25 centimeters in body length and weighs about 0.3 grams — a fraction of the weight of an actual penny. It can run about 14 body lengths per second, making it not only one of the smallest but also one of the fastest microrobots.

Scaling down does change some of the principles governing things like stride length and joint stiffness, so the researchers also developed a model that can predict locomotion metrics like running speeds, foot forces, and payload based on a target size. The model can then be used to design a system with the required specifications.
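The published model is not reproduced in the article, but the flavor of a size-based predictor can be shown with plain geometric scaling: assume lengths scale linearly with a scale factor and masses with its cube. The exponents below are textbook assumptions rather than the paper’s fitted model, and the baseline numbers are HAMR-JR’s figures quoted above.

```python
# Toy dimensional-scaling sketch, not the published model: predict rough
# metrics at another scale from HAMR-JR's figures quoted above, assuming
# length scales as s and mass as s**3 (speed exponent left at 0 here).

BASELINE = {"body_length_cm": 2.25, "mass_g": 0.3, "speed_bl_per_s": 14.0}
EXPONENTS = {"body_length_cm": 1, "mass_g": 3, "speed_bl_per_s": 0}  # assumed

def scaled(s):
    """Scale baseline metrics by geometric factor s with assumed exponents."""
    return {k: round(v * s ** EXPONENTS[k], 2) for k, v in BASELINE.items()}

print(scaled(2.0))  # rough double-scale prediction: ~4.5 cm, ~2.4 g
```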

“This new robot demonstrates that we have a good grasp on the theoretical and practical aspects of scaling down complex robots using our folding-based assembly approach,” said co-author Robert Wood, Ph.D., Charles River Professor of Engineering and Applied Sciences at SEAS and Core Faculty Member of the Wyss Institute.

This research was co-authored by Jennifer Shum, Samantha Castellanos and E. Farrell Helbling, Ph.D. This research was supported by the Defense Advanced Research Projects Agency (DARPA) and the Wyss Institute.

New study uses robots to uncover the connections between the human mind and walking control

Using a robot to disrupt the gait cycle of participants, researchers discovered that feedforward mechanisms controlled by the cerebellum and feedback mechanisms controlled at the spinal level determine how the nervous system responds to robot-induced changes in step length. Credit: Wyss Institute at Harvard University

By Tim Sullivan, Spaulding Rehabilitation Network Communications

Many of us aren’t spending much time outside lately, but there are still many obstacles for us to navigate as we walk around: the edge of the coffee table, small children, the family dog. How do our brains adjust to changes in our walking strides? Researchers at the Wyss Institute for Biologically Inspired Engineering at Harvard University and the Motion Analysis Laboratory at Spaulding Rehabilitation Hospital used robots to try to answer that question, and discovered that mechanisms in both the cerebellum and the spinal cord determine how the nervous system responds to robot-induced changes in step length. The new study is published in the latest issue of Scientific Reports, and points the way toward improving robot-based physical rehabilitation programs for patients.

“Our understanding of the neural mechanisms underlying locomotor adaptation is still limited. Specifically, how behavioral, functional, and physiological processes work in concert to achieve adaptation during locomotion has remained elusive to date,” said Paolo Bonato, Ph.D., an Associate Faculty member of the Wyss Institute and Director of the Spaulding Motion Analysis Lab who led the study. “Our goal is to create a better understanding of this process and hence develop more effective clinical interventions.”

For the study, the team used a robot to induce two opposite unilateral mechanical perturbations to human subjects as they were walking that affected their step length over multiple gait cycles. Electrical signals recorded from muscles were collected and analyzed to determine how muscle synergies (the activation of a group of muscles to create a specific movement) change in response to perturbation. The results revealed a combination of feedforward control signals coming from the cerebellum and feedback-driven control signals arising in the spinal cord during adaptation. The relative side-specific contributions of the two processes to motor-output adjustments, however, depended on which type of perturbation was delivered. Overall, the observations provide evidence that, in humans, both descending and afferent drives project onto the same spinal interneuronal networks that encode locomotor muscle synergies.
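Muscle synergies of this kind are conventionally extracted from multi-channel EMG with non-negative matrix factorization (NMF); whether the study used exactly this pipeline is an assumption, but a minimal sketch on synthetic data looks like this:

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic stand-in for rectified, low-pass-filtered EMG envelopes:
# 12 muscles x 1000 time samples, non-negative by construction.
rng = np.random.default_rng(0)
emg = np.abs(rng.normal(size=(12, 1000)))

# Factorize EMG ~= W @ H: W holds each synergy's muscle weightings,
# H holds each synergy's activation over time.
model = NMF(n_components=4, init="nndsvd", max_iter=500)
W = model.fit_transform(emg)  # (12 muscles x 4 synergies)
H = model.components_         # (4 synergies x 1000 samples)

# Comparing W and H before and after a perturbation is one way to
# quantify how synergies adapt; channel and synergy counts are assumed.
print(W.shape, H.shape)
```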

Researchers study how our brains adjust to changes in our walking strides, gaining insights that could be used to develop better physical rehabilitation programs. Credit: Wyss Institute.

These results mirror previous observations from animal studies, strongly suggesting the presence of a defined population of spinal interneurons regulating muscle coordination that can be accessed by both cortical and afferent drives in humans. “Our team hopes to build on this work to develop new approaches to the design of robot-assisted gait rehabilitation procedures targeting specific descending- and afferent-driven responses in muscle synergies in the coming year,” said Bonato.

The Tentacle Bot

By Leah Burrows

Of all the cool things about octopuses (and there are a lot), their arms may rank among the coolest.

Two-thirds of an octopus’s neurons are in its arms, meaning each arm literally has a mind of its own. Octopus arms can untie knots, open childproof bottles, and wrap around prey of any shape or size. The hundreds of suckers that cover their arms can form strong seals even on rough surfaces underwater.

Imagine if a robot could do all that.

Researchers have developed an octopus-inspired robot that can grip, move, and manipulate a wide range of objects. Credit: Elias Knubben, Zhexin Xie, August Domel, and Li Wen

Researchers at Harvard’s Wyss Institute for Biologically Inspired Engineering and John A. Paulson School of Engineering and Applied Sciences (SEAS) and colleagues from Beihang University have developed an octopus-inspired soft robotic arm that can grip, move, and manipulate a wide range of objects. Its flexible, tapered design, complete with suction cups, gives the gripper a firm grasp on objects of all shapes, sizes and textures — from eggs to smartphones to large exercise balls.

“Most previous research on octopus-inspired robots focused either on mimicking the suction or the movement of the arm, but not both,” said co-first author August Domel, Ph.D., a Postdoctoral Scholar at Stanford University and former graduate student at the Wyss Institute and Harvard. “Our research is the first to quantify the tapering angles of the arms and the combined functions of bending and suction, which allows for a single small gripper to be used for a wide range of objects that would otherwise require the use of multiple grippers.”

The research is published in Soft Robotics.

The researchers began by studying the tapering angle of real octopus arms and quantifying which design for bending and grabbing objects would work best for a soft robot. Next, the team looked at the layout and structure of the suckers (yes, that is the scientific term) and incorporated them into the design.

“We mimicked the general structure and distribution of these suckers for our soft actuators,” said co-first author Zhexin Xie, Ph.D., a graduate student at Beihang University. “Although our design is much simpler than its biological counterpart, these vacuum-based biomimetic suckers can attach to almost any object.”

Xie is the co-inventor of the Festo Tentacle Gripper, which is the first fully integrated implementation of this technology in a commercial prototype.

The soft robot is controlled with two valves, one to apply pressure for bending the arm and one for a vacuum that engages the suckers. By changing the pressure and vacuum, the arm can attach to any object, wrap around it, carry it, and release it. Credit: Bertoldi Lab/Harvard SEAS

The researchers control the arm with two valves: one applies pressure to bend the arm, while the other applies the vacuum that engages the suckers. By changing the pressure and vacuum, the arm can attach to an object, wrap around it, carry it, and release it.
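In control terms, that makes the gripper remarkably simple: a pick-and-release cycle is just a sequence of valve commands. The sketch below illustrates the idea with an entirely hypothetical valve interface and pressure values; the paper does not specify the hardware API.

```python
import time

class TwoValveArm:
    """Hypothetical interface to the gripper's two valves; the real
    hardware and its control API are not described in the paper."""
    def set_bend_pressure(self, kpa):
        print(f"bend pressure -> {kpa} kPa")   # positive pressure curls the arm
    def set_sucker_vacuum(self, kpa):
        print(f"sucker vacuum -> {kpa} kPa")   # vacuum engages the suckers

def pick_and_release(arm, hold_s=2.0):
    arm.set_bend_pressure(50)    # wrap the arm around the object (value assumed)
    arm.set_sucker_vacuum(-20)   # seal the suckers against its surface (value assumed)
    time.sleep(hold_s)           # carry the object
    arm.set_sucker_vacuum(0)     # vent the vacuum...
    arm.set_bend_pressure(0)     # ...and relax the arm to release

pick_and_release(TwoValveArm())
```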

The researchers successfully tested the device on many different objects, including thin sheets of plastic, coffee mugs, test tubes, eggs, and even live crabs. The tapered design also allowed the arm to squeeze into confined spaces and retrieve objects.

“The results from our study not only provide new insights into the creation of next-generation soft robotic actuators for gripping a wide range of morphologically diverse objects, but also contribute to our understanding of the functional significance of arm taper angle variability across octopus species,” said Katia Bertoldi, Ph.D., an Associate Faculty member of the Wyss Institute who is also the William and Ami Kuan Danoff Professor of Applied Mechanics at SEAS, and co-senior author of the study.

This research was also co-authored by James Weaver from the Wyss Institute, Ning An and Connor Green from Harvard SEAS, Zheyuan Gong, Tianmiao Wang, and Li Wen from Beihang University, and Elias M. Knubben from Festo SE & Co. It was supported in part by the National Science Foundation under grant DMREF-1533985 and Festo Corporate’s project division.

RoboBee powered by soft muscles

The Wyss Institute and SEAS robotics team built different models of the soft actuator-powered RoboBee. Shown here are a four-wing, two-actuator model and an eight-wing, four-actuator model, the latter being the first soft actuator-powered flying microrobot capable of controlled hovering flight. Credit: Harvard Microrobotics Lab/Harvard SEAS

By Leah Burrows

The sight of a RoboBee careening towards a wall or crashing into a glass box may have once triggered panic in the researchers in the Harvard Microrobotics Laboratory at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), but no more.

Researchers at SEAS and Harvard’s Wyss Institute for Biologically Inspired Engineering have developed a resilient RoboBee powered by soft artificial muscles that can crash into walls, fall onto the floor, and collide with other RoboBees without being damaged. It is the first microrobot powered by soft actuators to achieve controlled flight.

“There has been a big push in the field of microrobotics to make mobile robots out of soft actuators because they are so resilient,” said Yufeng Chen, Ph.D., a former graduate student and postdoctoral fellow at SEAS and first author of the paper. “However, many people in the field have been skeptical that they could be used for flying robots because the power density of those actuators simply hasn’t been high enough and they are notoriously difficult to control. Our actuator has high enough power density and controllability to achieve hovering flight.”

The research is published in Nature.

To solve the problem of power density, the researchers built upon the electrically-driven soft actuators developed in the lab of David Clarke, Ph.D., the Extended Tarr Family Professor of Materials at SEAS. These soft actuators are made using dielectric elastomers, soft materials with good insulating properties that deform when an electric field is applied.
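The physics can be summarized in one line: an electric field E across the elastomer produces an effective Maxwell stress p = ε0·εr·E², squeezing the film thinner and expanding its area. A back-of-the-envelope calculation, with material values and an operating point that are assumptions rather than the paper’s:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def maxwell_stress(voltage, thickness_m, eps_r=3.0):
    """Effective actuation stress (Pa) in a dielectric elastomer film.
    The relative permittivity and operating point are assumed values."""
    e_field = voltage / thickness_m   # field strength, V/m
    return EPS0 * eps_r * e_field ** 2

# e.g., 1.5 kV across a 20-micron film (illustrative numbers):
print(f"{maxwell_stress(1500, 20e-6) / 1e3:.0f} kPa")  # ~149 kPa
```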

By improving the electrode conductivity, the researchers were able to operate the actuator at 500 Hertz, on par with the rigid actuators used previously in similar robots.

Another challenge when dealing with soft actuators is that the system tends to buckle and become unstable. To solve this challenge, the researchers built a lightweight airframe with a piece of vertical constraining thread to prevent the actuator from buckling.

The soft actuators can be easily assembled and replaced in these small-scale robots. To demonstrate various flight capabilities, the researchers built several different models of the soft actuator-powered RoboBee. A two-wing model could take off from the ground but had no additional control. A four-wing, two-actuator model could fly in a cluttered environment, overcoming multiple collisions in a single flight.

“One advantage of small-scale, low-mass robots is their resilience to external impacts,” said Elizabeth Farrell Helbling, Ph.D., a former graduate student at SEAS and a coauthor on the paper. “The soft actuator provides an additional benefit because it can absorb impact better than traditional actuation strategies. This would come in handy in potential applications such as flying through rubble for search and rescue missions.”

An eight-wing, four-actuator model demonstrated controlled hovering flight, the first for a soft-actuator-powered flying microrobot.

Next, the researchers aim to increase the efficiency of the soft-powered robot, which still lags far behind more traditional flying robots.

“Soft actuators with muscle-like properties and electrical activation represent a grand challenge in robotics,” said Wyss Institute Core Faculty member Robert Wood, Ph.D., who also is the Charles River Professor of Engineering and Applied Sciences at SEAS and senior author of the paper. “If we could engineer high-performance artificial muscles, the sky is the limit for what robots we could build.”

Harvard’s Office of Technology Development has protected the intellectual property relating to this project and is exploring commercialization opportunities.

This paper was also co-authored by Huichan Zhao, Jie Mao, Pakpong Chirarattananon, and Nak-seung Patrick Hyun. It was supported in part by the National Science Foundation.

Complex lattices that change in response to stimuli open a range of applications in electronics, robotics, and medicine

By Leah Burrows

What would it take to transform a flat sheet into a human face? How would the sheet need to grow and shrink to form eyes that are concave into the face and a convex nose and chin that protrude?

How to encode and release complex curves in shape-shifting structures is at the center of research led by the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and Harvard’s Wyss Institute for Biologically Inspired Engineering.

Over the past decade, theorists and experimentalists have found inspiration in nature as they have sought to unravel the physics, build mathematical frameworks, and develop materials and 3D and 4D-printing techniques for structures that can change shape in response to external stimuli.

However, complex multi-scale curvature has remained out of reach.

A portrait of Carl Friedrich Gauss painted by Christian Albrecht Jensen in 1840. The researchers generated a 3D surface via an artificial intelligence algorithm. The ribs in the different layers of the lattice are programmed to grow and shrink in response to a change in temperature, mapping the curves of Gauss’ face. Images courtesy of Lori Sanders/Harvard SEAS

Now, researchers have created the most complex shape-shifting structures to date — lattices composed of multiple materials that grow or shrink in response to changes in temperature. To demonstrate their technique, the team printed flat lattices that morph into a frequency-shifting antenna or the face of pioneering mathematician Carl Friedrich Gauss in response to a change in temperature.

The research is published in the Proceedings of the National Academy of Sciences.

“Form both enables and constrains function,” said L. Mahadevan, Ph.D., the de Valpine Professor of Applied Mathematics, and Professor of Physics and Organismic and Evolutionary Biology at Harvard. “Using mathematics and computation to design form, and a combination of multi-scale geometry and multi-material printing to realize it, we are now able to build shape-shifting structures with the potential for a range of functions.”

“Together, we are creating new classes of shape-shifting matter,” said Jennifer A. Lewis, Sc.D., the Hansjörg Wyss Professor of Biologically Inspired Engineering at Harvard. “Using an integrated design and fabrication approach, we can encode complex ‘instruction sets’ within these printed materials that drive their shape-morphing behavior.”

Lewis is also a Core Faculty member of the Wyss Institute.

To create complex and doubly-curved shapes — such as those found on a face — the team turned to a bilayer, multimaterial lattice design.

“The open cells of the curved lattice give it the ability to grow or shrink a lot, even if the material itself undergoes limited extension,” said co-first author Wim M. van Rees, Ph.D., who was a postdoctoral fellow at SEAS and is now an Assistant Professor at MIT.

To achieve complex curves, growing and shrinking the lattice on its own isn’t enough. You need to be able to direct the growth locally.

“That’s where the materials palette that we’ve developed comes in,” said J. William Boley, Ph.D., a former postdoctoral fellow at SEAS and co-first author of the paper. “By printing materials with different thermal expansion behavior in pre-defined configurations, we can control the growth and shrinkage of each individual rib of the lattice, which in turn gives rise to complex bending of the printed lattice both within and out of plane.” Boley is now an Assistant Professor at Boston University.

The researchers used four different materials and programmed each rib of the lattice to change shape in response to a change in temperature. Using this method, they printed a shape-shifting patch antenna, which can change resonant frequencies as it changes shape.
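The underlying mechanics of each rib is that of a classic bimetallic strip: two bonded layers with mismatched thermal expansion curve when the temperature changes, and Timoshenko’s century-old formula predicts the curvature. A sketch with assumed material values (not the paper’s four-material palette):

```python
def bilayer_curvature(d_alpha, d_temp, h, m=1.0, n=1.0):
    """Timoshenko's bimetallic-strip curvature (1/m).
    d_alpha: CTE mismatch (1/K); d_temp: temperature change (K);
    h: total thickness (m); m = t1/t2 thickness ratio; n = E1/E2 modulus ratio."""
    num = 6.0 * d_alpha * d_temp * (1.0 + m) ** 2
    den = h * (3.0 * (1.0 + m) ** 2 + (1.0 + m * n) * (m ** 2 + 1.0 / (m * n)))
    return num / den

# Assumed values: CTE mismatch of 1e-4 /K, a 30 K swing, a 1 mm thick rib.
kappa = bilayer_curvature(1e-4, 30.0, 1e-3)
print(f"radius of curvature ~ {100.0 / kappa:.1f} cm")  # ~22 cm
```

Programming different ribs with different effective expansion mismatches lets the lattice bend by different amounts at different places, which is exactly what encoding a face requires.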

To showcase the ability of the method to create a complex surface with multiscale curvature, the researchers printed the face of the 19th century mathematician who laid the foundations of differential geometry: Carl Friedrich Gauss. Images courtesy of Lori Sanders/Harvard SEAS

To showcase the ability of the method to create a complex surface with multiscale curvature, the researchers decided to print a human face. They chose the face of the 19th century mathematician who laid the foundations of differential geometry: Carl Friedrich Gauss. The researchers began with a 2D portrait of Gauss, painted in 1840, and generated a 3D surface using an open-source artificial intelligence algorithm. They then programmed the ribs in the different layers of the lattice to grow and shrink, mapping the curves of Gauss’ face.
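The article does not name the algorithm, but monocular depth estimation is a standard open-source route from a single 2D image to a height field. The sketch below uses MiDaS purely as an illustrative stand-in, not as the team’s actual tool:

```python
import cv2
import torch

# Load a pretrained monocular depth model (MiDaS, via PyTorch Hub) and
# its matching image transform. This choice of model is an assumption.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").default_transform

img = cv2.cvtColor(cv2.imread("gauss_portrait.jpg"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    depth = midas(transform(img)).squeeze().numpy()  # relative per-pixel depth

# `depth` would then serve as the target height field whose curvature
# the lattice ribs are programmed to reproduce.
```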

This inverse design approach and multimaterial 4D printing method could be extended to other stimuli-responsive materials and be used to create scalable, reversible, shape-shifting structures with unprecedented complexity.

“Application areas include soft electronics, smart fabrics, tissue engineering, robotics and beyond,” said Boley.

“This work was enabled by recent advances in posing and solving geometric inverse problems combined with 4D-printing technologies using multiple materials. Going forward, our hope is that this multi-disciplinary approach for shaping matter will be broadly adopted,” said Mahadevan.

This research was co-authored by Charles Lissandrello, Mark Horenstein, Ryan Truby, and Arda Kotikian. It was supported by the National Science Foundation and Draper Laboratory.

A gentle grip on gelatinous creatures

Jellyfish are about 95% water, making them some of the most diaphanous, delicate animals on the planet. But the remaining 5% of them have yielded important scientific discoveries, like green fluorescent protein (GFP) that is now used extensively by scientists to study gene expression, and life-cycle reversal that could hold the keys to combating aging. Jellyfish may very well harbor other, potentially life-changing secrets, but the difficulty of collecting them has severely limited the study of such “forgotten fauna.” The sampling tools available to marine biologists on remotely operated vehicles (ROVs) were largely developed for the marine oil and gas industries, and are much better-suited to grasping and manipulating rocks and heavy equipment than jellies, often shredding them to pieces in attempts to capture them.

A new ultra-soft gripper developed at the Wyss Institute and Baruch College uses fettuccini-like silicone “fingers” inflated with water to gently but firmly grasp jellyfish and release them without harm, allowing scientists to safely interact with these delicate creatures in their own habitats. Credit: Anand Varma

Now, a new technology developed by researchers at Harvard’s Wyss Institute for Biologically Inspired Engineering, John A. Paulson School of Engineering and Applied Sciences (SEAS), and Baruch College at CUNY offers a novel solution to that problem in the form of an ultra-soft, underwater gripper that uses hydraulic pressure to gently but firmly wrap its fettuccini-like fingers around a single jellyfish, then release it without causing harm. The gripper is described in a new paper published in Science Robotics.

“Our ultra-gentle gripper is a clear improvement over existing deep-sea sampling devices for jellies and other soft-bodied creatures that are otherwise nearly impossible to collect intact,” said first author Nina Sinatra, Ph.D., a former graduate student in the lab of Robert Wood at the Wyss Institute. “This technology can also be extended to improve underwater analysis techniques and allow extensive study of the ecological and genetic features of marine organisms without taking them out of the water.”

The gripper’s six “fingers” are composed of thin, flat strips of silicone with a hollow channel inside bonded to a layer of flexible but stiffer polymer nanofibers. The fingers are attached to a rectangular, 3D-printed plastic “palm” and, when their channels are filled with water, curl in the direction of the nanofiber-coated side. The fingers each exert an extremely low amount of pressure – about 0.0455 kPa, or less than one-tenth of the pressure of a human’s eyelid on their eye. By contrast, current state-of-the-art soft marine grippers, which are used to capture delicate but more robust animals than jellyfish, exert about 1 kPa.

First author Nina Sinatra, Ph.D. tests the ultra-soft gripper on a jellyfish at the New England Aquarium. Credit: Anand Varma

The researchers fitted their ultra-gentle gripper to a specially created hand-held device and tested its ability to grasp an artificial silicone jellyfish in a tank of water to determine the positioning and precision required to collect a sample successfully, as well as the optimum angle and speed at which to capture a jellyfish. They then moved on to the real thing at the New England Aquarium, where they used the grippers to grab swimming moon jellies, jelly blubbers, and spotted jellies, all about the size of a golf ball.

The gripper was successfully able to trap each jellyfish against the palm of the device, and the jellyfish were unable to break free from the fingers’ grasp until the gripper was depressurized. The jellyfish showed no signs of stress or other adverse effects after being released, and the fingers were able to open and close roughly 100 times before showing signs of wear and tear.

“Marine biologists have been waiting a long time for a tool that replicates the gentleness of human hands in interacting with delicate animals like jellyfish from inaccessible environments,” said co-author David Gruber, Ph.D., who is a Professor of Biology and Environmental Science at Baruch College, CUNY and a National Geographic Explorer. “This gripper is part of an ever-growing soft robotic toolbox that promises to make underwater species collection easier and safer, which would greatly improve the pace and quality of research on animals that have been under-studied for hundreds of years, giving us a more complete picture of the complex ecosystems that make up our oceans.”

The ultra-soft gripper is the latest innovation in the use of soft robotics for underwater sampling, an ongoing collaboration between Gruber and Wyss Founding Core Faculty member Robert Wood, Ph.D. that has produced the origami-inspired RAD sampler and multi-functional “squishy fingers” to collect a diverse array of hard-to-capture organisms, including squids, octopuses, sponges, sea whips, corals, and more.

“Soft robotics is an ideal solution to long-standing problems like this one across a wide variety of fields, because it combines the programmability and robustness of traditional robots with unprecedented gentleness thanks to the flexible materials used,” said Wood, who is the co-lead of the Wyss Institute’s Bioinspired Soft Robotics Platform, the Charles River Professor of Engineering and Applied Sciences at SEAS, and a National Geographic Explorer.

“At the Wyss Institute we are always asking, ‘How can we make this better?’ I am extremely impressed by the ingenuity and out-of-the-box thinking that Rob Wood and his team have applied to solve a real-world problem that exists in the open ocean, rather than in the laboratory. This could help to greatly advance ocean science,” said Wyss Institute Founding Director Donald Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School, the Vascular Biology Program at Boston Children’s Hospital, and Professor of Bioengineering at SEAS.

A new ultra-soft gripper developed at the Wyss Institute uses fettuccini-like silicone “fingers” inflated with water to gently but firmly grasp jellyfish and release them without harm, allowing scientists to safely interact with these delicate creatures in their own habitats. Credit: Wyss Institute at Harvard

The team is continuing to refine the ultra-soft gripper’s design, and aims to conduct studies that evaluate the jellyfishes’ physiological response to being held by the gripper, to more definitively prove that the device does not cause the animals stress. Wood and Gruber are also co-Principal Investigators of the Schmidt Ocean Institute’s “Designing the Future” project, and will be further testing their various underwater robots on an upcoming expedition aboard the research ship Falkor in 2020.

Additional authors of the paper are Clark Teeple, Daniel Vogt, M.S., and Kevin Kit Parker, Ph.D. from the Wyss Institute and Harvard SEAS. Parker is a Founding Core Faculty member of the Wyss Institute and the Tarr Family Professor of Bioengineering and Applied Physics at SEAS. The research was supported by the National Science Foundation, The Harvard University Materials Research Science and Engineering Center, The National Academies Keck Futures Initiative, and the National Geographic Society.

Suit up with a robot to walk AND run more easily

The lightweight, versatile exosuit assists hip extension during uphill walking and at different running speeds in natural terrain. Credit: Wyss Institute at Harvard University

By Benjamin Boettner

Between walking at a leisurely pace and running for your life, human gaits can cover a wide range of speeds. Typically, we choose the gait that allows us to consume the least amount of energy at a given speed. For example, at low speeds, the metabolic rate of walking is lower than that of running in a slow jog; conversely, at high speeds, the metabolic cost of running is lower than that of speed walking.

Researchers in academic and industry labs have previously developed robotic devices for rehabilitation and other areas of life that can assist either walking or running, but no untethered portable device could efficiently do both. Assisting walking and running with a single device is challenging because of the fundamentally different biomechanics of the two gaits. However, both gaits have in common an extension of the hip joint, which starts around the time when the foot comes in contact with the ground and requires considerable energy for propelling the body forward.

As reported today in Science, a team of researchers at Harvard’s Wyss Institute for Biologically Inspired Engineering and John A. Paulson School of Engineering and Applied Sciences (SEAS), and the University of Nebraska Omaha now has developed a portable exosuit that assists with gait-specific hip extension during both walking and running. Their lightweight exosuit is made of textile components worn at the waist and thighs, and a mobile actuation system attached to the lower back which is controlled by an algorithm that can robustly detect the transition from walking to running and vice versa.

The team first showed that the exosuit worn by users in treadmill-based indoor tests, on average, reduced their metabolic costs of walking by 9.3% and of running by 4% compared to when they were walking and running without the device. “We were excited to see that the device also performed well during uphill walking, at different running speeds and during overground testing outside, which showed the versatility of the system,” said Conor Walsh, Ph.D., who led the study. Walsh is a Core Faculty member of the Wyss Institute, the Gordon McKay Professor of Engineering and Applied Sciences at SEAS, and Founder of the Harvard Biodesign Lab. “While the metabolic reductions we found are modest, our study demonstrates that it is possible to have a portable wearable robot assist more than just a single activity, helping to pave the way for these systems to become ubiquitous in our lives,” said Walsh.

The hip exosuit was developed as part of the Defense Advanced Research Projects Agency (DARPA)’s former Warrior Web program and is the culmination of years of research and optimization of the soft exosuit technology by the team. A previous multi-joint exosuit developed by the team could assist both the hip and ankle during walking, and a medical version of the exosuit aimed at improving gait rehabilitation for stroke survivors is now commercially available in the US and Europe, via a collaboration with ReWalk Robotics.

The team’s most recent hip-assisting exosuit is designed to be simpler and lighter than their past multi-joint exosuit. It assists the wearer via a cable actuation system. The actuation cables apply a tensile force between the waist belt and thigh wraps to generate an external extension torque at the hip joint that works in concert with the gluteal muscles. The device weighs 5 kg in total, with more than 90% of its weight located close to the body’s center of mass. “This approach to concentrating the weight, combined with the flexible apparel interface, minimizes the energetic burden and movement restriction to the wearer,” said co-first-author Jinsoo Kim, a SEAS graduate student in Walsh’s group. “This is important for walking, but even more so for running as the limbs move back and forth much faster.” Kim shared the first-authorship with Giuk Lee, Ph.D., a former postdoctoral fellow on Walsh’s team and now Assistant Professor at Chung-Ang University in Seoul, South Korea.
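Mechanically, the assistance reduces to a simple relation: cable tension times the cable’s moment arm about the hip gives the extension torque. The toy profile below makes that concrete; the peak force, timings, and 5 cm moment arm are all illustrative assumptions, not the paper’s reported values.

```python
import numpy as np

def assist_profile(gait_pct, peak_n=200.0, onset=0.0, peak_at=20.0, offset=40.0):
    """Toy triangular cable-tension profile over one gait cycle (0-100%).
    The real controller shapes gait-specific profiles; these numbers are assumed."""
    force = np.zeros_like(gait_pct, dtype=float)
    rise = (gait_pct >= onset) & (gait_pct < peak_at)
    fall = (gait_pct >= peak_at) & (gait_pct <= offset)
    force[rise] = peak_n * (gait_pct[rise] - onset) / (peak_at - onset)
    force[fall] = peak_n * (offset - gait_pct[fall]) / (offset - peak_at)
    return force

pct = np.linspace(0, 100, 101)
torque = assist_profile(pct) * 0.05  # 5 cm moment arm about the hip (assumed)
print(f"peak assist torque ~ {torque.max():.1f} N·m")
```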

A major challenge the team had to solve was that the exosuit needed to be able to distinguish between walking and running gaits and change its actuation profiles accordingly with the right amount of assistance provided at the right time of the gait cycle.

To explain the different kinetics during the gait cycles, biomechanists often compare walking to the motions of an inverted pendulum and running to the motions of a spring-mass system. During walking, the body’s center of mass moves upward after heel-strike, then reaches maximum height at the middle of the stance phase to descend towards the end of the stance phase. In running, the movement of the center of mass is opposite. It descends towards a minimum height at the middle of the stance phase and then moves upward towards push-off.

“We took advantage of these biomechanical insights to develop our biologically inspired gait classification algorithm that can robustly and reliably detect a transition from one gait to the other by monitoring the acceleration of an individual’s center of mass with sensors that are attached to the body,” said co-corresponding author Philippe Malcolm, Ph.D., Assistant Professor at University of Nebraska Omaha. “Once a gait transition is detected, the exosuit automatically adjusts the timing of its actuation profile to assist the other gait, as we demonstrated by its ability to reduce metabolic oxygen consumption in wearers.”
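The core of that classifier can be captured in a few lines. At mid-stance, the center of mass sits at the top of its arc in walking (accelerating downward) but at the bottom of its bounce in running (accelerating upward), so the sign of the vertical acceleration separates the two gaits. The threshold and sensor details below are assumptions, not the published algorithm:

```python
def classify_gait(vertical_acc_midstance_g):
    """Toy gait classifier: sign of gravity-compensated vertical
    center-of-mass acceleration (in g) sampled at mid-stance.
    Walking: COM at peak height -> downward (negative) acceleration.
    Running: COM at minimum height -> upward (positive) acceleration."""
    return "running" if vertical_acc_midstance_g > 0 else "walking"

# e.g., readings from an IMU worn near the pelvis (values illustrative):
for sample in (-0.3, 0.4):
    print(f"{sample:+.1f} g -> {classify_gait(sample)}")
```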

In ongoing work, the team is focused on optimizing all aspects of the technology, including further reducing weight, individualizing assistance and improving ease of use. “It is very satisfying to see how far our approach has come,” said Walsh, “and we are excited to continue to apply it to a range of applications, including assisting those with gait impairments, industry workers at risk of injury performing physically strenuous tasks, or recreational weekend warriors.”

The team’s portable exosuit is made of textile components worn at the waist and thighs, and a mobile actuation system attached to the lower back which uses an algorithm that robustly predicts transitions between walking and running gaits. Credit: Wyss Institute at Harvard University

“This breakthrough study coming out of the Wyss Institute’s Bioinspired Soft Robotics platform gives us a glimpse into a future where wearable robotic devices can improve the lives of the healthy, as well as serve those with injuries or in need of rehabilitation,” said Wyss Institute Founding Director Donald Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School, the Vascular Biology Program at Boston Children’s Hospital, and Professor of Bioengineering at SEAS.

Other authors on the study are past and present members of Walsh’s team, including data analyst Roman Heimgartner; Research Fellow Dheepak Arumukhom Revi; Control Engineer Nikos Karavas, Ph.D.; Functional Apparel Designer Danielle Nathanson; Robotics Engineer Ignacio Galiana, Ph.D.; Robotics Engineer Asa Eckert-Erdheim; Electromechanical Engineer Patrick Murphy; Engineer David Perry; Software Engineer Nicolas Menard, and graduate student Dabin Kim Choe. The study was funded by the Defense Advanced Research Projects Agency’s Warrior Web Program, the National Science Foundation and Harvard’s Wyss Institute for Biologically Inspired Engineering.
