Fiber-infused ink enables 3D-printed heart muscle to beat

This illustration shows a 3D printed heart ventricle engineered with fiber-infused ink. Credit: Harvard SEAS

By Kat J. McAlpine / SEAS Communications

Over the last decade, advances in 3D printing have unlocked new possibilities for bioengineers to build heart tissues and structures. Their goals include creating better in vitro platforms for discovering new therapeutics for heart disease (the leading cause of death in the United States, responsible for about one in every five deaths nationally) and using 3D-printed cardiac tissues to evaluate which treatments might work best in individual patients. A more distant aim is to fabricate implantable tissues that can heal or replace faulty or diseased structures inside a patient’s heart.

In a paper published in Nature Materials, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Wyss Institute for Biologically Inspired Engineering at Harvard University report the development of a new hydrogel ink infused with gelatin fibers that enables 3D printing of a functional heart ventricle that beats like a human heart. They discovered that the fiber-infused gel (FIG) ink allows heart muscle cells printed in the shape of a ventricle to align and beat in coordination like a human heart chamber.

“People have been trying to replicate organ structures and functions to test drug safety and efficacy as a way of predicting what might happen in the clinical setting,” says Suji Choi, research associate at SEAS and first author on the paper. But until now, 3D printing techniques alone have not been able to achieve physiologically-relevant alignment of cardiomyocytes, the cells responsible for transmitting electrical signals in a coordinated fashion to contract heart muscle.

“We started this project to address some of the inadequacies in 3D printing of biological tissues.”

– Kevin “Kit” Parker

The innovation lies in the addition of fibers within a printable ink. “FIG ink is capable of flowing through the printing nozzle but, once the structure is printed, it maintains its 3D shape,” says Choi. “Because of those properties, I found it’s possible to print a ventricle-like structure and other complex 3D shapes without using extra support materials or scaffolds.”


This video shows the spontaneous beating of a 3D-printed heart muscle. Credit: Harvard SEAS.

To create the FIG ink, Choi leveraged a rotary jet spinning technique developed in the lab of Kevin “Kit” Parker, Ph.D. that fabricates microfiber materials using an approach similar to the way cotton candy is spun. Postdoctoral researcher and Wyss Lumineer Luke MacQueen, a co-author on the paper, proposed the idea that fibers created by the rotary jet spinning technique could be added to an ink and 3D printed. Parker is a Wyss Associate Faculty member and the Tarr Family Professor of Bioengineering and Applied Physics at SEAS.

“When Luke developed this concept, the vision was to broaden the range of spatial scales that could be printed with 3D printers by dropping the bottom out of the lower limits, taking it down to the nanometer scale,” Parker says. “The advantage of producing the fibers with rotary jet spinning rather than electrospinning” – a more conventional method for generating ultrathin fibers – “is that we can use proteins that would otherwise be degraded by the electrical fields in electrospinning.”

Using the rotary jet to spin gelatin fibers, Choi produced a sheet of material with a similar appearance to cotton. Next, she used sonication – sound waves – to break that sheet into fibers about 80 to 100 micrometers long and about 5 to 10 micrometers in diameter. Then, she dispersed those fibers into a hydrogel ink.

“This concept is broadly applicable – we can use our fiber-spinning technique to reliably produce fibers in the lengths and shapes we want.”

– Suji Choi

The most difficult aspect was troubleshooting the desired ratio between fibers and hydrogel in the ink to maintain fiber alignment and the overall integrity of the 3D-printed structure.

As Choi printed 2D and 3D structures using FIG ink, the cardiomyocytes lined up in tandem with the direction of the fibers inside the ink. By controlling the printing direction, Choi could therefore control how the heart muscle cells would align.

The tissue-engineered 3D ventricle model. Credit: Harvard SEAS

When she applied electrical stimulation to 3D-printed structures made with FIG ink, she found it triggered a coordinated wave of contractions in alignment with the direction of those fibers. In a ventricle-shaped structure, “it was very exciting to see the chamber actually pumping in a similar way to how real heart ventricles pump,” Choi says.

As she experimented with more printing directions and ink formulas, she found she could generate even stronger contractions within ventricle-like shapes.

“Compared to the real heart, our ventricle model is simplified and miniaturized,” she says. The team is now working toward building more life-like heart tissues with thicker muscle walls that can pump fluid more strongly. Despite not being as strong as real heart tissue, the 3D-printed ventricle could pump 5-20 times more fluid volume than previous 3D-printed heart chambers.

The team says the technique can also be used to build heart valves, dual-chambered miniature hearts, and more.

“FIGs are but one tool we have developed for additive manufacturing,” Parker says. “We have other methods in development as we continue our quest to build human tissues for regenerative therapeutics. The goal is not to be tool driven – we are tool agnostic in our search for a better way to build biology.”

Additional authors include Keel Yong Lee, Sean L. Kim, Huibin Chang, John F. Zimmerman, Qianru Jin, Michael M. Peters, Herdeline Ann M. Ardoña, Xujie Liu, Ann-Caroline Heiler, Rudy Gabardi, Collin Richardson, William T. Pu, and Andreas Bausch.

This work was sponsored by SEAS; the National Science Foundation through the Harvard University Materials Research Science and Engineering Center (DMR-1420570, DMR-2011754); the National Institutes of Health and National Center for Advancing Translational Sciences (UH3HL141798, UG3TR003279); the Harvard University Center for Nanoscale Systems (CNS), a member of the National Nanotechnology Coordinated Infrastructure Network (NNCI) which is supported by the National Science Foundation (ECCS-2025158, S10OD023519); and the American Chemical Society’s Irving S. Sigal Postdoctoral Fellowships.

Adama Sesay on solving problems with sensors and microsystems

If you had asked Adama Sesay as a child what she wanted to be when she grew up, the answer would have been a doctor, an architect, and a firefighter. Now that she’s a Senior Engineer specializing in sensors and microsystems, you may think she’s gone in a completely different direction, but by following the passions that led her to those ideas – science, design, and saving lives – she’s found a career she loves. At the Wyss, Adama is a member of the Advanced Technology Team and works on a wide range of projects, from sensor-integrated Organ Chips that make drugs safer to an enzyme that converts sugar to fiber to make food healthier, while simultaneously leading the Women’s Health Catalyst. Learn more about Adama and her work in this month’s Humans of the Wyss.

What projects are you involved with?

I specialize in biosensing, microfluidics, and microsystems, and my projects span quite a diverse range of areas. The first project I’ve been managing is a BARDA project, which is a federally funded project looking at integrating sensors to measure biomarkers like cytokines from a lymph node tissue model, or a lymphoid follicle (LF) Chip. In this project, I’ve mostly concentrated on the instrumentation side, providing the actual hardware (which is a sort of sensor-integrated cartridge) and retrofitting it into a commercial Organ Chip system.

Adama Sesay, Senior Engineer II. Credit: Wyss Institute at Harvard University

Then I have another project where we’re developing an enzyme-encapsulated particle that reduces sugar in food once it’s consumed, converting it to dietary fiber. Basically, this would be a “smart food” ingredient, where the enzyme is only activated once you consume it. That way, the food tastes the same, but the actual amount of sugar your body metabolizes is lower.

I’m working on a third project where we are developing and microfabricating a microfluidic Blood Clotting Chip to study clotting time for patients that have mesothelioma, a cancer caused by exposure to asbestos. We’re collaborating with Massachusetts General Hospital and Boston Children’s Hospital.

What are biosensors, microfluidics, and microsystems?

A biosensor is a device that combines a biological component with a sensor transducer; it measures a biological or chemical reaction by producing signals that indicate the concentration of the analyte, or component of interest, in the monitored sample. Microfluidics refers to a system of small channels that can move and deliver low volumes of fluid. In fabrication terms, a microfluidic channel is anything with dimensions in the micrometer range. The advantage of microfluidics is that you can deliver very low volumes to different areas and manipulate those flows; a classic example is an Organ Chip. A microsystem device in this context takes it a bit further: it is the integration of sensors, microfluidics, and application. The three are a closely integrated package.
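The signal-to-concentration step described here can be made concrete with a small calibration sketch. This is purely illustrative: the standards, units, and the linear response below are invented assumptions, not details of any actual Wyss sensor.

```python
# Illustrative sketch: converting a biosensor signal to an analyte
# concentration via a calibration curve fitted to known standards.
# All numbers are hypothetical.

def fit_line(xs, ys):
    # Ordinary least-squares fit of y = slope * x + intercept.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Hypothetical calibration standards: known cytokine concentrations
# (pg/mL) and the sensor signal (e.g., current in nA) each produced.
concentrations = [0.0, 10.0, 20.0, 40.0]
signals = [0.5, 2.5, 4.5, 8.5]

slope, intercept = fit_line(concentrations, signals)

def signal_to_concentration(signal):
    # Invert the calibration line to estimate an unknown sample.
    return (signal - intercept) / slope

estimate = signal_to_concentration(6.5)  # unknown sample reading
```

Real assays often fit nonlinear calibration curves rather than a straight line, but the inversion idea, which maps a measured signal back to a concentration, is the same.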

What real-world problems do these projects address?

With the BARDA project, we can use the LF Chips to monitor the immune system’s reaction to different types of drugs. We can use patient samples to get time resolved data about the inflammation response. In addition to helping screen drugs for safety, this could help us determine which therapies can be used on immuno-compromised patients or what a vaccine response will be in a certain population.

This illustration demonstrates the structure of the LF Chip that Adama is working on. Credit: Wyss Institute at Harvard University

The sugar fiber project will help address America’s ever-growing problems with obesity and diabetes. Despite these issues, a big food industry here relies on refined sugars, especially high fructose corn syrup, and high-fructose diets also contribute to metabolic syndrome. Plus, the American diet is low in fiber. We started this project looking at how to make food more enjoyable while also being responsible. Our enzyme encapsulation will hopefully address diabetes and metabolic syndrome while increasing fiber, which will make people’s gut microbiomes healthier.

We hope to use the Blood Clotting Chip to understand the clotting time and the thrombosis factors of mesothelioma. It can also be used as a diagnostic tool. Understanding a patient’s blood clotting factor is essential when they go into surgery, even beyond those suffering from this disease. This became even more apparent to me recently when my father needed to have emergency surgery, but they had to wait until he could be off blood thinners for a period of time. If we could use this as a diagnostic test, surgeons would know when a patient’s clotting factor was such that they were ready for surgery.

What is your specific role on the team?

I’m a Senior Engineer here and part of the Advanced Technology Team, and I lead the biosensing, microfluidics, and microsystems effort at the Wyss. I am also responsible for the microfabrication room and its efforts, and work closely with Pawan Jolly, who is the lead on sensors. That entails, but is not limited to, research project management, writing funding proposals, mentorship, and overseeing relationships with internal and external collaborators.

How are you helping to advance women’s health at the Wyss?

One of my biggest interests at the moment is to build up the Women’s Health Catalyst. In a place like the Wyss that’s looking at unmet needs, it’s natural that we have quite a lot of projects already in our pipeline dedicated to women’s health because therapeutics and diagnostics specifically aimed at women’s health issues are one of the biggest unmet clinical needs in the world. All this work is being done within our existing Focus Areas. Many of our researchers are incredibly dedicated to increasing our knowledge and finding real-world solutions.

Adama and the other speakers at the Wyss’ event celebrating Women at the Intersection of Science and Art on International Women’s Day. Credit: Wyss Institute at Harvard University

So, right now we’re aiming to coalesce all these projects to bring together our brilliant scientists, clinicians, and technology teams to advance research and make drugs and devices to help people. We aim to be able to highlight these projects to attract external collaborators to work with our Wyss technology translation engine, and one day become a world-class beacon where people want to come and really make advances in women’s health.

How are you helping to bridge the gap between academia and industry at the Wyss?

I have a diverse group of researchers on my team including biologists, biotechnologists, biomedical engineers, and mechanical engineers who look at challenges very differently, while I look at the industrial need and see how we can translate the science into something to address the gaps. I think what it boils down to is facilitating the communication between scientists and engineers on the research side and translating that acquired knowledge into know-how, services, and products on the business and industrial sides.

“I think what it boils down to is facilitating the communication between scientists and engineers on the research side and translating that acquired knowledge into know-how, services, and products on the business and industrial sides.”

– Adama Sesay

For example, if I’m designing a diagnostic device, I will listen to the scientists about how the fundamental biology works in their system and use my experience in sensor development, microsystems, and developing point-of-care devices to speak to more practically minded engineers about how to build the device, finding a common language between the two. Then, we need to communicate why this device is useful to a business audience in order to successfully commercialize it.

What brought you to the Wyss?

I wanted to be in a place that was busy doing what I had been doing for a while in Europe, which is translational science. The first place on my wish list was the Wyss Institute. I loved the work going on here; the organs-on-chips and the translational nature of the place. It’s quite unique in its structure. So, I got in touch with people working here, especially in Donald Ingber’s lab, and I was lucky that there was a position open when I applied.

Members of Don Ingber’s lab, including Adama, at the Wyss Retreat in 2022. Credit: Wyss Institute at Harvard University

How has your previous work experience shaped your approach to your work today?

Starting with my master’s and Ph.D., much of my work has focused on technology transfer. It’s shaped my approach to work because it has taught me to talk to different people, bring various viewpoints and skills together, really listen to where the problems are, and find solutions. I think sometimes, especially earlier in your career, it’s easy to think that your idea is brilliant, but at the end of the day, it might be a great technology that’s hard to translate into a product. I’ve learned that you need to take your ego out of it, listen, and find the best way forward, even if it isn’t your way. Having a critical mass of new knowledge around you means you’ll always be at the forefront; you just have to be open to trying new things and making the sum of the parts better than the individual pieces.

What is your biggest piece of advice for an academic scientist looking to translate their technology?

“Maintain a level of curiosity and wonder. Be prepared to keep on improving and learning.”

– Adama Sesay

Maintain a level of curiosity and wonder. Be prepared to keep on improving and learning. Don’t be discouraged if you get knocked back, because even if your first approach doesn’t work, it’s by going through that and being willing to get back up again that you will succeed.

What inspired you to get into this field?

If you had asked me what I wanted to be when I was a kid, I would always say a doctor, an architect, or a firefighter. A doctor because I really liked science and I didn’t know there was anything else out there other than that. My parents were in the medical field, so I thought that was it. An architect because I liked art, and I love buildings. I thought architecture was the practical way to apply that. I was unaware there was a profession called an engineer. And a firefighter because I enjoy being active and I thought they were so heroic. I just admired them.

I realized very quickly that none of those things were exactly for me, but I followed the passions that led me to those ideas – science, design, and saving lives – and by doing what I love I found my way to a career in translational research focused on sensors and microsystems. If you really enjoy what you do, it doesn’t feel like a job.

What continues to motivate you?

Making a difference and working with a great team in an amazing work environment. I think that knowing that the people I’m working alongside are truly having an impact, even if they’re not on my project directly, is very inspiring. It makes me feel that I’m a part of something that can cause positive change in my lifetime.

“I think that knowing that the people I’m working alongside are truly having an impact, even if they’re not on my project directly, is very inspiring. It makes me feel that I’m a part of something that can cause positive change in my lifetime.”

– Adama Sesay

When not at the Wyss, how do you like to spend your time?

I like roller skating. I started playing my clarinet again, which I used to do when I was a teenager, and that’s given me a lot of joy. I also like watching films. My favorite recent films have been Everything Everywhere All at Once and The Woman King. Everything Everywhere All at Once manages to be light while also touching on some quite thought-provoking concepts. I love the types of films that you can spend time talking about. The Woman King, while it has faced some criticism for being inaccurate, opens a dialogue about African history on a world stage among audiences in the West who have never known or even wondered about it. Although some of these discussions might be uncomfortable, at least people are beginning to have them. Again, I like a film that starts a conversation.

What’s something unique about you that someone wouldn’t know from your resume?

My mother suffered from Alzheimer’s disease, and it finally took her this past Christmas. In her memory, my sister and I are working towards building a smart city in her village in Sierra Leone. To do this, we’re raising awareness and funding to build an agricultural school for women and empower them to harvest crops based on new technology that’s sustainable and appropriate for the land, given that it’s a wildlife sanctuary area, and create businesses from farming. Hopefully, by next year we can start working on the curriculum for the school. We’re putting a lot of work into this, but we think it’s a great way to honor our mother’s legacy and enable women to get out of poverty and become future entrepreneurs.

What does it feel like to be working towards translating cutting-edge technology that has the potential to have a real and significant impact on people’s lives and society?

It feels great to be part of such a dynamic environment. I think as an engineer and a technology transfer specialist, it’s the best of all worlds. I’m lucky enough to have worked at some exceptional institutes in some amazing countries, but the Wyss is quite special in that we have a critical mass of world-class, high-impact projects ripe for translation. I’m in my fifth year now and it’s been a great ride so far. I’m looking forward to what comes next.

Sensing Parkinson’s symptoms

MyoExo integrates a series of sensors into a wearable device capable of detecting slight changes in muscle strain and bulging, enabling it to measure and track the symptoms of Parkinson’s disease. Credit: Oluwaseun Araromi

By Matthew Goisman/SEAS Communications

Nearly one million people in the United States live with Parkinson’s disease. The degenerative condition affects the neurons in the brain that produce the neurotransmitter dopamine, which can impact motor function in multiple ways, including muscle tremors, limb rigidity and difficulty walking.

There is currently no cure for Parkinson’s disease, and current treatments are limited by a lack of quantitative data about the progress of the disease.

MyoExo, a translation-focused research project based on technology developed at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Wyss Institute for Biologically Inspired Engineering, aims to provide that data. The team is refining the technology and starting to develop a business plan as part of the Harvard Innovation Labs venture program. The MyoExo wearable device aims not only to provide a remote monitoring device for patients in at-home settings, but also to be sensitive enough to aid early diagnosis of Parkinson’s disease.

“This is a disease that’s affecting a lot of people and it seems like the main therapeutics that tackle this have not changed significantly in the past several decades,” said Oluwaseun Araromi, Research Associate in Materials Science and Mechanical Engineering at SEAS and the Wyss Institute.

The MyoExo technology consists of a series of wearable sensors, each one capable of detecting slight changes in muscle strain and bulging. When integrated into a wearable device, the data can provide what Araromi described as “muscle-centric physiological signatures.”

“The enabling technology underlying this is a sensor that detects small changes in the shape of an object,” he said. “Parkinson’s disease, especially in its later stages, really expresses itself as a movement disorder, so sensors that can detect shape changes can also detect changes in the shape of muscle as people move.”

MyoExo emerged from research done in the Harvard Biodesign Lab of Conor Walsh, the Paul A. Maeder Professor of Engineering and Applied Sciences, and the Microrobotics Lab of Rob Wood, the Charles River Professor of Engineering and Applied Sciences at SEAS. Araromi, Walsh and Wood co-authored a paper on their research into resilient wearable sensors in November 2020, around the same time the team began to focus on medical applications of the technology.

“If we had these hypersensitive sensors in something that a person was wearing, we could detect how their muscles were bulging,” Walsh said. “That was more application-agnostic. We didn’t know exactly where that would be the most important, and I credit Seun and our Wyss collaborators for being the ones to think about identifying Parkinson’s applications.”

Araromi sees the MyoExo technology as having value for three major stakeholders: the pharmaceutical industry, clinicians and physicians, and patients. Pharmaceutical companies could use data from the wearable system to quantify their medications’ effect on Parkinson’s symptoms, while clinicians could determine if one treatment regimen is more effective than another for a specific patient. Patients could use the system to track their own treatment, whether that’s medication, physical therapy, or both.

“Some patients are very incentivized to track their progress,” Araromi said. “They want to know that if they were really good last week and did all of the exercises that they were prescribed, their wearable device would tell them their symptomatology has reduced by 5% compared to the week before. We envision that as something that would really encourage individuals to adhere to their treatment regimens.”

MyoExo’s sensor technology is based on research conducted in the Harvard Biodesign Lab of Conor Walsh and the Microrobotics Lab of Rob Wood at SEAS, and further developed through the Wyss Institute for Biologically Inspired Engineering and Harvard Innovation Labs venture program. Credit: Oluwaseun Araromi

Araromi joined SEAS and the Wyss Institute as a postdoctoral researcher in 2016, having earned a Ph.D. in mechanical engineering from the University of Bristol in England and completed a postdoc at the Swiss Federal Institute of Technology Lausanne.

His interest in sensor technology made him a great fit for research spanning the Biodesign and Microrobotics labs, and his early work included helping develop an exosuit to aid with walking.

“I was initially impressed with Seun’s strong background in materials, transduction and physics,” Walsh said. “He really understood how you’d think about creating novel sensors with soft materials. Seun’s really the translation champion for the project in terms of driving forward the technology, but at the same time trying to think about the need in the market, and how we demonstrate that we can meet that.”

The technology is currently in the human testing phase to demonstrate proof-of-concept detection of clinically relevant metrics, with support from the Wyss Institute Validation Project program. Araromi wants to show that the wearable device can quantify the difference between the muscle movements of someone with Parkinson’s and someone without. From there, the goal is to demonstrate that the device can quantify whether a person has early- or late-stage symptoms of the disease, as well as their response to treatment.

“We are evaluating our technology and validating our technical approach, making sure that as it’s currently constructed, even in this crude form, we can get consistent data and results,” Araromi said. “We’re doing this in a small pilot phase, such that if there are issues, we can fix those issues, and then expand to a larger population where we would test our device on more individuals with Parkinson’s disease. That should really convince ourselves and hopefully the community that we are able to reach a few key technical milestones, and then garner more interest and potentially investment and partnership.”

Team builds first living robots that can reproduce

AI-designed (C-shaped) organisms push loose stem cells (white) into piles as they move through their environment. Credit: Douglas Blackiston and Sam Kriegman

By Joshua Brown, University of Vermont Communications

To persist, life must reproduce. Over billions of years, organisms have evolved many ways of replicating, from budding plants to sexual animals to invading viruses.

Now scientists at the University of Vermont, Tufts University, and the Wyss Institute for Biologically Inspired Engineering at Harvard University have discovered an entirely new form of biological reproduction—and applied their discovery to create the first-ever, self-replicating living robots.

The same team that built the first living robots (“Xenobots,” assembled from frog cells—reported in 2020) has discovered that these computer-designed and hand-assembled organisms can swim out into their tiny dish, find single cells, gather hundreds of them together, and assemble “baby” Xenobots inside their Pac-Man-shaped “mouth”—that, a few days later, become new Xenobots that look and move just like themselves.

And then these new Xenobots can go out, find cells, and build copies of themselves. Again and again.

“With the right design—they will spontaneously self-replicate,” says Joshua Bongard, Ph.D., a computer scientist and robotics expert at the University of Vermont who co-led the new research.

The results of the new research were published in the Proceedings of the National Academy of Sciences.

Into the Unknown

In a Xenopus laevis frog, these embryonic cells would develop into skin. “They would be sitting on the outside of a tadpole, keeping out pathogens and redistributing mucus,” says Michael Levin, Ph.D., a professor of biology and director of the Allen Discovery Center at Tufts University and co-leader of the new research. “But we’re putting them into a novel context. We’re giving them a chance to reimagine their multicellularity.” Levin is also an Associate Faculty member at the Wyss Institute.

As Pac-Man-shaped Xenobot “parents” move around their environment, they collect loose stem cells in their “mouths” that, over time, aggregate to create “offspring” Xenobots that develop to look just like their creators. Credit: Doug Blackiston and Sam Kriegman

And what they imagine is something far different than skin. “People have thought for quite a long time that we’ve worked out all the ways that life can reproduce or replicate. But this is something that’s never been observed before,” says co-author Douglas Blackiston, Ph.D., the senior scientist at Tufts University and the Wyss Institute who assembled the Xenobot “parents” and developed the biological portion of the new study.

“This is profound,” says Levin. “These cells have the genome of a frog, but, freed from becoming tadpoles, they use their collective intelligence, a plasticity, to do something astounding.” In earlier experiments, the scientists were amazed that Xenobots could be designed to achieve simple tasks. Now they are stunned that these biological objects—a computer-designed collection of cells—will spontaneously replicate. “We have the full, unaltered frog genome,” says Levin, “but it gave no hint that these cells can work together on this new task,” of gathering and then compressing separated cells into working self-copies.

“These are frog cells replicating in a way that is very different from how frogs do it. No animal or plant known to science replicates in this way,” says Sam Kriegman, Ph.D., the lead author on the new study, who completed his Ph.D. in Bongard’s lab at UVM and is now a postdoctoral researcher at Tufts University’s Allen Discovery Center and Harvard University’s Wyss Institute for Biologically Inspired Engineering.

On its own, the Xenobot parent, made of some 3,000 cells, forms a sphere. “These can make children but then the system normally dies out after that. It’s very hard, actually, to get the system to keep reproducing,” says Kriegman. But with an artificial intelligence program working on the Deep Green supercomputer cluster at UVM’s Vermont Advanced Computing Core, an evolutionary algorithm was able to test billions of body shapes in simulation—triangles, squares, pyramids, starfish—to find ones that allowed the cells to be more effective at the motion-based “kinematic” replication reported in the new research.

“We asked the supercomputer at UVM to figure out how to adjust the shape of the initial parents, and the AI came up with some strange designs after months of chugging away, including one that resembled Pac-Man,” says Kriegman. “It’s very non-intuitive. It looks very simple, but it’s not something a human engineer would come up with. Why one tiny mouth? Why not five? We sent the results to Doug and he built these Pac-Man-shaped parent Xenobots. Then those parents built children, who built grandchildren, who built great-grandchildren, who built great-great-grandchildren.” In other words, the right design greatly extended the number of generations.
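The shape search Kriegman describes can be sketched, in spirit, as a simple evolutionary loop. Everything below is a toy stand-in: the binary “shape” encoding, the fitness function, and all parameters are invented for illustration, whereas the actual study scored candidate designs in physics simulations on a supercomputer.

```python
import random

def fitness(shape):
    # Toy stand-in for the simulated replication score of a body shape.
    # Here we simply reward shapes matching an idealized, hypothetical
    # C-shaped (Pac-Man-like) occupancy profile.
    target = [1, 1, 0, 1, 1]
    return sum(int(a == b) for a, b in zip(shape, target))

def mutate(shape, rate=0.2):
    # Flip each bit of the shape encoding with a small probability.
    return [bit ^ (random.random() < rate) for bit in shape]

def evolve(pop_size=30, generations=50, seed=0):
    random.seed(seed)
    # Start from random binary "shapes" (voxel-like occupancy profiles).
    pop = [[random.randint(0, 1) for _ in range(5)] for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the best half, refill with mutated copies of survivors.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
```

The ingredients mirror the description above: a population of candidate shapes, a score from (simulated) performance, and selection plus mutation repeated over many generations until a high-performing design emerges.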

Kinematic replication is well-known at the level of molecules—but it has never been observed before at the scale of whole cells or organisms.

An AI-designed “parent” organism (C shape; red) beside stem cells that have been compressed into a ball (“offspring”; green). Credit: Douglas Blackiston and Sam Kriegman

“We’ve discovered that there is this previously unknown space within organisms, or living systems, and it’s a vast space,” says Bongard. “How do we then go about exploring that space? We found Xenobots that walk. We found Xenobots that swim. And now, in this study, we’ve found Xenobots that kinematically replicate. What else is out there?”

Or, as the scientists write in the Proceedings of the National Academy of Sciences study: “life harbors surprising behaviors just below the surface, waiting to be uncovered.”

Responding to Risk

Some people may find this exhilarating. Others may react with concern, or even terror, to the notion of a self-replicating biotechnology. For the team of scientists, the goal is deeper understanding.

“We are working to understand this property: replication. The world and technologies are rapidly changing. It’s important, for society as a whole, that we study and understand how this works,” says Bongard. These millimeter-sized living machines, entirely contained in a laboratory, easily extinguished, and vetted by federal, state and institutional ethics experts, “are not what keep me awake at night. What presents risk is the next pandemic; accelerating ecosystem damage from pollution; intensifying threats from climate change,” he says. “This is an ideal system in which to study self-replicating systems. We have a moral imperative to understand the conditions under which we can control it, direct it, douse it, exaggerate it.”

Bongard points to the COVID epidemic and the hunt for a vaccine. “The speed at which we can produce solutions matters deeply. If we can develop technologies, learning from Xenobots, where we can quickly tell the AI: ‘We need a biological tool that does X and Y and suppresses Z,’ —that could be very beneficial. Today, that takes an exceedingly long time.” The team aims to accelerate how quickly people can go from identifying a problem to generating solutions—”like deploying living machines to pull microplastics out of waterways or build new medicines,” Bongard says.

“We need to create technological solutions that grow at the same rate as the challenges we face,” Bongard says.

And the team sees promise in the research for advancements toward regenerative medicine. “If we knew how to tell collections of cells to do what we wanted them to do, ultimately, that’s regenerative medicine—that’s the solution to traumatic injury, birth defects, cancer, and aging,” says Levin. “All of these different problems are here because we don’t know how to predict and control what groups of cells are going to build. Xenobots are a new platform for teaching us.”

The scientists behind the Xenobots participated in a live panel discussion on December 1, 2021 to discuss the latest developments in their research. Credit: Wyss Institute at Harvard University

Face masks that can diagnose COVID-19

By Lindsay Brownell

Most people associate the term “wearable” with a fitness tracker, smartwatch, or wireless earbuds. But what if cutting-edge biotechnology were integrated into your clothing, and could warn you when you were exposed to something dangerous?

A team of researchers from the Wyss Institute for Biologically Inspired Engineering at Harvard University and the Massachusetts Institute of Technology has found a way to embed synthetic biology reactions into fabrics, creating wearable biosensors that can be customized to detect pathogens and toxins and alert the wearer.

The team has integrated this technology into standard face masks to detect the presence of the SARS-CoV-2 virus in a patient’s breath. The button-activated mask gives results within 90 minutes at levels of accuracy comparable to standard nucleic acid-based diagnostic tests like the polymerase chain reaction (PCR). The achievement is reported in Nature Biotechnology.

The wFDCF face mask can be integrated into any standard face mask. The wearer pushes a button on the mask that releases a small amount of water into the system, which provides results within 90 minutes. Credit: Wyss Institute at Harvard University

“We have essentially shrunk an entire diagnostic laboratory down into a small, synthetic biology-based sensor that works with any face mask, and combines the high accuracy of PCR tests with the speed and low cost of antigen tests,” said co-first author Peter Nguyen, Ph.D., a Research Scientist at the Wyss Institute. “In addition to face masks, our programmable biosensors can be integrated into other garments to provide on-the-go detection of dangerous substances including viruses, bacteria, toxins, and chemical agents.”

Taking cells out of the equation

The SARS-CoV-2 biosensor is the culmination of three years of work on what the team calls their wearable freeze-dried cell-free (wFDCF) technology, which is built upon earlier iterations created in the lab of Wyss Core Faculty member and senior author Jim Collins. The technique involves extracting and freeze-drying the molecular machinery that cells use to read DNA and produce RNA and proteins. These biological elements are shelf-stable for long periods of time and activating them is simple: just add water. Synthetic genetic circuits can be added to create biosensors that can produce a detectable signal in response to the presence of a target molecule.

The researchers first applied this technology to diagnostics by integrating it into a tool to address the Zika virus outbreak in 2015. They created biosensors that can detect pathogen-derived RNA molecules and coupled them with a colored or fluorescent indicator protein, then embedded the genetic circuit into paper to create a cheap, accurate, portable diagnostic. Following their success embedding their biosensors into paper, they next set their sights on making them wearable.

These flexible, wearable biosensors can be integrated into fabric to create clothing that can detect pathogens and environmental toxins and alert the wearer via a companion smartphone app. Credit: Wyss Institute at Harvard University

“Other groups have created wearables that can sense biomolecules, but those techniques have all required putting living cells into the wearable itself, as if the user were wearing a tiny aquarium. If that aquarium ever broke, then the engineered bugs could leak out onto the wearer, and nobody likes that idea,” said Nguyen. He and his teammates started investigating whether their wFDCF technology could solve this problem, methodically testing it in more than 100 different kinds of fabrics.

Then, the COVID-19 pandemic struck.

Pivoting from wearables to face masks

“We wanted to contribute to the global effort to fight the virus, and we came up with the idea of integrating wFDCF into face masks to detect SARS-CoV-2. The entire project was done under quarantine or strict social distancing starting in May 2020. We worked hard, sometimes bringing non-biological equipment home and assembling devices manually. It was definitely different from the usual lab infrastructure we’re used to working under, but everything we did has helped us ensure that the sensors would work in real-world pandemic conditions,” said co-first author Luis Soenksen, Ph.D., a Postdoctoral Fellow at the Wyss Institute.

The team called upon every resource they had available to them at the Wyss Institute to create their COVID-19-detecting face masks, including toehold switches developed in Core Faculty member Peng Yin’s lab and SHERLOCK sensors developed in the Collins lab. The final product consists of three different freeze-dried biological reactions that are sequentially activated by the release of water from a reservoir via the single push of a button.

The first reaction cuts open the SARS-CoV-2 virus’ membrane to expose its RNA. The second reaction is an amplification step that makes numerous double-stranded copies of the Spike-coding gene from the viral RNA. The final reaction uses CRISPR-based SHERLOCK technology to detect any Spike gene fragments, and in response cut a probe molecule into two smaller pieces that are then reported via a lateral flow assay strip. Whether or not there are any Spike fragments available to cut depends on whether the patient has SARS-CoV-2 in their breath. This difference is reflected in changes in a simple pattern of lines that appears on the readout portion of the device, similar to an at-home pregnancy test.

When SARS-CoV-2 particles are present, the wFDCF system cuts a molecular bond that changes the pattern of lines that form in the readout strip, similar to an at-home pregnancy test. Credit: Wyss Institute at Harvard University

The wFDCF face mask is the first SARS-CoV-2 nucleic acid test that achieves high accuracy rates comparable to current gold standard RT-PCR tests while operating fully at room temperature, eliminating the need for heating or cooling instruments and allowing the rapid screening of patient samples outside of labs.

“This work shows that our freeze-dried, cell-free synthetic biology technology can be extended to wearables and harnessed for novel diagnostic applications, including the development of a face mask diagnostic. I am particularly proud of how our team came together during the pandemic to create deployable solutions for addressing some of the world’s testing challenges,” said Collins, Ph.D., who is also the Termeer Professor of Medical Engineering & Science at MIT.

Beyond the COVID-19 pandemic

The Wyss Institute’s wearable freeze-dried cell-free (wFDCF) technology can quickly diagnose COVID-19 from virus in patients’ breath, and can also be integrated into clothing to detect a wide variety of pathogens and other dangerous substances. Credit: Wyss Institute at Harvard University

The face mask diagnostic is in some ways the icing on the cake for the team, which had to overcome numerous challenges in order to make their technology truly wearable, including capturing droplets of a liquid substance within a flexible, unobtrusive device and preventing evaporation. The face mask diagnostic omits electronic components in favor of ease of manufacturing and low cost, but integrating more permanent elements into the system opens up a wide range of other possible applications.

In their paper, the researchers demonstrate that a network of fiber optic cables can be integrated into their wFDCF technology to detect fluorescent light generated by the biological reactions, indicating detection of the target molecule with a high level of accuracy. This digital signal can be sent to a smartphone app that allows the wearer to monitor their exposure to a vast array of substances.

“This technology could be incorporated into lab coats for scientists working with hazardous materials or pathogens, scrubs for doctors and nurses, or the uniforms of first responders and military personnel who could be exposed to dangerous pathogens or toxins, such as nerve gas,” said co-author Nina Donghia, a Staff Scientist at the Wyss Institute.

The team is actively searching for manufacturing partners who are interested in helping to enable the mass production of the face mask diagnostic for use during the COVID-19 pandemic, as well as for detecting other biological and environmental hazards.

“This team’s ingenuity and dedication to creating a useful tool to combat a deadly pandemic while working under unprecedented conditions is impressive in and of itself. But even more impressive is that these wearable biosensors can be applied to a wide variety of health threats beyond SARS-CoV-2, and we at the Wyss Institute are eager to collaborate with commercial manufacturers to realize that potential,” said Don Ingber, M.D., Ph.D., the Wyss Institute’s Founding Director. Ingber is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and Boston Children’s Hospital, and Professor of Bioengineering at the Harvard John A. Paulson School of Engineering and Applied Sciences.

Additional authors of the paper include Nicolaas M. Angenent-Mari and Helena de Puig from the Wyss Institute and MIT; former Wyss and MIT member Ally Huang who is now at Ampylus; Rose Lee, Shimyn Slomovic, Geoffrey Lansberry, Hani Sallum, Evan Zhao, and James Niemi from the Wyss Institute; and Tommaso Galbersanini from Dreamlux.

This research was supported by the Defense Threat Reduction Agency under grant HDTRA1-14-1-0006, the Paul G. Allen Frontiers Group, the Wyss Institute for Biologically Inspired Engineering, Harvard University, Johnson & Johnson through the J&J Lab Coat of the Future QuickFire Challenge award, CONACyT grant 342369 / 408970, and MIT-692 TATA Center fellowship 2748460.

Wielding a laser beam deep inside the body

A laser projected on a white surface
The laser steering device is able to trace complex trajectories such as an exposed wire as well as a word within geometrical shapes. Credit: Wyss Institute at Harvard University

A microrobotic opto-electro-mechanical device able to steer a laser beam with high speed and a large range of motion could enhance the possibilities of minimally invasive surgeries

By Benjamin Boettner

Minimally invasive surgeries in which surgeons gain access to internal tissues through natural orifices or small external incisions are common practice in medicine. They are performed for problems as diverse as delivering stents through catheters, treating abdominal complications, and performing transnasal operations at the skull base in patients with neurological conditions.

The ends of devices for such surgeries are highly flexible (or “articulated”) to enable the visualization and specific manipulation of the surgical site in the target tissue. In the case of energy-delivering devices that allow surgeons to cut or dry (desiccate) tissues, and stop internal bleeds (coagulate) deep inside the body, a heat-generating energy source is added to the end of the device. However, presently available energy sources delivered via a fiber or electrode, such as radio frequency currents, have to be brought close to the target site, which limits surgical precision and can cause unwanted burns in adjacent tissue sections and smoke development.

Laser technology, which already is widely used in a number of external surgeries, such as those performed in the eye or skin, would be an attractive solution. For internal surgeries, the laser beam needs to be precisely steered, positioned and quickly repositioned at the distal end of an endoscope, which cannot be accomplished with the currently available relatively bulky technology.


Responding to an unmet need for a robotic surgical device that is flexible enough to access hard-to-reach areas of the G.I. tract while causing minimal peripheral tissue damage, researchers at the Wyss Institute and Harvard SEAS have developed a laser steering device that has the potential to improve surgical outcomes for patients. Credit: Wyss Institute at Harvard University

Now, robotic engineers led by Wyss Associate Faculty member Robert Wood, Ph.D., and postdoctoral fellow Peter York, Ph.D., at Harvard University’s Wyss Institute for Biologically Inspired Engineering and John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a laser-steering microrobot in a miniaturized 6×16 millimeter package that operates with high speed and precision, and can be integrated with existing endoscopic tools. Their approach, reported in Science Robotics, could help significantly enhance the capabilities of numerous minimally invasive surgeries.

A prototype of the laser steering device creating a star trajectory
This collage shows a prototype of the laser steering device creating a star trajectory at 5000 mm/s. Credit: Wyss Institute at Harvard University

In this multi-disciplinary approach, we managed to harness our ability to rapidly prototype complex microrobotic mechanisms … provide clinicians with a non-disruptive solution that could allow them to advance the possibilities of minimally invasive surgeries in the human body with life-altering or potentially life-saving impact.

Robert Wood

“To enable minimally invasive laser surgery inside the body, we devised a microrobotic approach that allows us to precisely direct a laser beam at small target sites in complex patterns within an anatomical area of interest,” said York, the first and corresponding author on the study and a postdoctoral fellow on Wood’s microrobotics team. “With its large range of articulation, minimal footprint, and fast and precise action, this laser-steering end-effector has great potential to enhance surgical capabilities simply by being added to existing endoscopic devices in a plug-and-play fashion.”

The team needed to overcome the basic challenges in design, actuation, and microfabrication of the optical steering mechanism that enables tight control over the laser beam after it has exited from an optical fiber. These challenges, along with the need for speed and precision, were exacerbated by the size constraints – the entire mechanism had to be housed in a cylindrical structure with roughly the diameter of a drinking straw to be useful for endoscopic procedures.

“We found that for steering and re-directing the laser beam, a configuration of three small mirrors that can rapidly rotate with respect to one another in a small ‘galvanometer’ design provided a sweet spot for our miniaturization effort,” said second author Rut Peña, a mechanical engineer with micro-manufacturing expertise in Wood’s group. “To get there, we leveraged methods from our microfabrication arsenal in which modular components are laminated step-wise onto a superstructure on the millimeter scale – a highly effective fabrication process when it comes to iterating on designs quickly in search of an optimum, and delivering a robust strategy for mass-manufacturing a successful product.”

An endoscope with laser as end-effector
The microrobotic laser-steering end-effector (on the right) can be used as a fitted add-on accessory for existing endoscopic systems (on the left) for use in minimally invasive surgery. Credit: Wyss Institute at Harvard University

The team demonstrated that their laser-steering end-effector, miniaturized to a cylinder measuring merely 6 mm in diameter and 16 mm in length, was able to map out and follow complex trajectories in which multiple laser ablations could be performed with high speed, over a large range, and be repeated with high accuracy.

To further show that the device, when attached to the end of a common colonoscope, could be applied to a life-like endoscopic task, York and Peña, advised by Wyss Clinical Fellow Daniel Kent, M.D., successfully simulated the resection of polyps by navigating their device via tele-operation in a benchtop phantom tissue made of rubber. Kent also is a resident physician in general surgery at the Beth Israel Deaconess Medical Center.

“In this multi-disciplinary approach, we managed to harness our ability to rapidly prototype complex microrobotic mechanisms that we have developed over the past decade to provide clinicians with a non-disruptive solution that could allow them to advance the possibilities of minimally invasive surgeries in the human body with life-altering or potentially life-saving impact,” said senior author Wood, Ph.D., who also is the Charles River Professor of Engineering and Applied Sciences at SEAS.

Laser inside a colon
The laser steering device performing a colonoscope demo in a life-size model of the colon. Credit: Wyss Institute at Harvard University

Wood’s microrobotics team together with technology translation experts at the Wyss Institute have patented their approach and are now further de-risking their medical technology (MedTech) as an add-on for surgical endoscopes.

“The Wyss Institute’s focus on microrobotic devices and this new laser-steering device developed by Robert Wood’s team working across disciplines with clinicians and experts in translation will hopefully revolutionize how minimally invasive surgical procedures are carried out in a number of disease areas,” said Wyss Founding Director Donald Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and Boston Children’s Hospital, and Professor of Bioengineering at SEAS.

The study was funded by the National Science Foundation under award #CMMI-1830291, and the Wyss Institute for Biologically Inspired Engineering.

Robotic swarm swims like a school of fish

A Bluebot
Bluebots are fish-shaped robots that can coordinate their movements in three dimensions underwater, rather than the two dimensions previously achieved by Kilobots. Credit: Harvard SEAS

By Leah Burrows / SEAS Communications

Schools of fish exhibit complex, synchronized behaviors that help them find food, migrate, and evade predators. No one fish or sub-group of fish coordinates these movements, nor do fish communicate with each other about what to do next. Rather, these collective behaviors emerge from so-called implicit coordination — individual fish making decisions based on what they see their neighbors doing.

This type of decentralized, autonomous self-organization and coordination has long fascinated scientists, especially in the field of robotics.

Now, a team of researchers at Harvard’s Wyss Institute and John A. Paulson School of Engineering and Applied Sciences (SEAS) has developed fish-inspired robots that can synchronize their movements like a real school of fish, without any external control. It is the first time researchers have demonstrated complex 3D collective behaviors with implicit coordination in underwater robots.

“Robots are often deployed in areas that are inaccessible or dangerous to humans, areas where human intervention might not even be possible,” said Florian Berlinger, a Ph.D. Candidate at the Wyss Institute and SEAS and first author of the paper. “In these situations, it really benefits you to have a highly autonomous robot swarm that is self-sufficient. By using implicit rules and 3D visual perception, we were able to create a system that has a high degree of autonomy and flexibility underwater where things like GPS and WiFi are not accessible.”

The research is published in Science Robotics.

The fish-inspired robotic swarm, dubbed Blueswarm, was created in the lab of Wyss Associate Faculty member Radhika Nagpal, Ph.D., who is also the Fred Kavli Professor of Computer Science at SEAS. Nagpal’s lab is a pioneer in self-organizing systems, from their 1,000-robot Kilobot swarm to their termite-inspired robotic construction crew.

However, most previous robotic swarms operated in two-dimensional space. Three-dimensional spaces, like air and water, pose significant challenges to sensing and locomotion.

To overcome these challenges, the researchers developed a vision-based coordination system in their fish robots based on blue LED lights. Each underwater robot, called a Bluebot, is equipped with two cameras and three LED lights. The on-board, fisheye-lens cameras detect the LEDs of neighboring Bluebots and use a custom algorithm to determine their distance, direction and heading. Based on the simple production and detection of LED light, the researchers demonstrated that the Blueswarm could exhibit complex self-organized behaviors, including aggregation, dispersion, and circle formation.
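The custom algorithm itself is not detailed in the article, but the geometry behind ranging from LEDs is familiar. As a minimal sketch under a simple pinhole-camera assumption (the actual Bluebots use fisheye lenses and a custom 3D method, so this is an illustration, not the published algorithm), the distance to a neighbor follows from the apparent pixel separation of its LEDs, whose true physical spacing is known:

```python
def estimate_distance(led_spacing_m, pixel_separation, focal_length_px):
    """Estimate range to a neighbor robot from its two LEDs.

    Toy pinhole-camera model: two lights physically led_spacing_m apart
    that appear pixel_separation pixels apart in an image taken with a
    camera of focal length focal_length_px lie at distance
    led_spacing_m * focal_length_px / pixel_separation.
    """
    if pixel_separation <= 0:
        raise ValueError("LEDs must be resolved as two distinct points")
    return led_spacing_m * focal_length_px / pixel_separation
```

For example, with these made-up numbers, LEDs 5 cm apart that appear 20 pixels apart to a camera with a 400-pixel focal length would be about 1 meter away.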

A Blueswarm robot flashing the LEDs
These fish-inspired robots can synchronize their movements without any outside control. Based on the simple production and detection of LED light, the robotic collective exhibits complex self-organized behaviors, including aggregation, dispersion, and circle formation. Credit: Harvard University’s Self-organizing Systems Research Group

“Each Bluebot implicitly reacts to its neighbors’ positions,” said Berlinger. “So, if we want the robots to aggregate, then each Bluebot will calculate the position of each of its neighbors and move towards the center. If we want the robots to disperse, the Bluebots do the opposite. If we want them to swim as a school in a circle, they are programmed to follow lights directly in front of them in a clockwise direction.”
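The aggregation and dispersion rules Berlinger describes reduce to a displacement toward or away from the perceived neighbor centroid. The sketch below is a hypothetical illustration of that idea, not the published controller:

```python
def centroid(neighbors):
    """Mean 3D position of the neighbors this robot can see."""
    n = len(neighbors)
    return tuple(sum(p[i] for p in neighbors) / n for i in range(3))

def move_vector(self_pos, neighbors, mode):
    """Displacement toward (aggregate) or away from (disperse) the
    neighbor centroid, computed from local perception only."""
    cx, cy, cz = centroid(neighbors)
    v = (cx - self_pos[0], cy - self_pos[1], cz - self_pos[2])
    if mode == "disperse":
        v = tuple(-c for c in v)
    return v
```

Each robot runs the same rule on its own view of its neighbors; the collective behavior emerges without any central coordinator.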

The researchers also simulated a simple search mission with a red light in the tank. Using the dispersion algorithm, the Bluebots spread out across the tank until one comes close enough to the light source to detect it. Once the robot detects the light, its LEDs begin to flash, which triggers the aggregation algorithm in the rest of the school. From there, all the Bluebots aggregate around the signaling robot.
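The search mission chains dispersion to aggregation through a simple trigger. One tick of that logic could be sketched as follows (hypothetical code with made-up field names, for illustration only):

```python
import math

def step(robots, light_pos, detect_radius=0.5):
    """One tick of the search: robots is a list of dicts with 'pos' and
    'flashing'. Any robot within detect_radius of the light starts
    flashing its LEDs; once any robot flashes, the whole school
    switches from dispersing to aggregating around the signal."""
    for r in robots:
        if math.dist(r["pos"], light_pos) <= detect_radius:
            r["flashing"] = True
    found = any(r["flashing"] for r in robots)
    return "aggregate" if found else "disperse"
```

The key design point mirrors the real experiment: no robot broadcasts coordinates; the flashing LEDs themselves are the only signal the others need.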


Blueswarm, a Harvard Wyss- and SEAS-developed underwater robot collective, uses a 3D vision-based coordination system and 3D locomotion to coordinate the movements of its individual Bluebots autonomously, mimicking the behavior of schools of fish. Credit: Harvard SEAS

“Our results with Blueswarm represent a significant milestone in the investigation of underwater self-organized collective behaviors,” said Nagpal. “Insights from this research will help us develop future miniature underwater swarms that can perform environmental monitoring and search in visually-rich but fragile environments like coral reefs. This research also paves a way to better understand fish schools, by synthetically recreating their behavior.”

The research was co-authored by Melvin Gauci, Ph.D., a former Wyss Technology Development Fellow. It was supported in part by the Office of Naval Research, the Wyss Institute for Biologically Inspired Engineering, and an Amazon AWS Research Award.

Ultra-sensitive and resilient sensor for soft robotic systems

Sensor sleeve
Graduate student Moritz Graule demonstrates a fabric arm sleeve with embedded sensors. The sensors detect small changes in Graule’s forearm muscle through the fabric. Such a sleeve could be used in everything from virtual reality simulations and sportswear to clinical diagnostics for neurodegenerative diseases like Parkinson’s Disease. Credit: Oluwaseun Araromi/Harvard SEAS

By Leah Burrows / SEAS Communications

Newly engineered slinky-like strain sensors for textiles and soft robotic systems survive the washing machine, cars and hammers.

Think about your favorite t-shirt, the one you’ve worn a hundred times, and all the abuse you’ve put it through. You’ve washed it more times than you can remember, spilled on it, stretched it, crumpled it up, maybe even singed it leaning over the stove once. We put our clothes through a lot and if the smart textiles of the future are going to survive all that we throw at them, their components are going to need to be resilient.

Now, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Wyss Institute for Biologically Inspired Engineering have developed an ultra-sensitive, seriously resilient strain sensor that can be embedded in textiles and soft robotic systems. The research is published in Nature.

“Current soft strain gauges are really sensitive but also really fragile,” said Oluwaseun Araromi, Ph.D., a Research Associate in Materials Science and Mechanical Engineering at SEAS and the Wyss Institute and first author of the paper. “The problem is that we’re working in an oxymoronic paradigm — highly sensitive sensors are usually very fragile and very strong sensors aren’t usually very sensitive. So, we needed to find mechanisms that could give us enough of each property.”

In the end, the researchers created a design that looks and behaves very much like a Slinky.

“A Slinky is a solid cylinder of rigid metal but if you pattern it into this spiral shape, it becomes stretchable,” said Araromi. “That is essentially what we did here. We started with a rigid bulk material, in this case carbon fiber, and patterned it in such a way that the material becomes stretchable.”

The pattern is known as a serpentine meander, because its sharp ups and downs resemble the slithering of a snake. The patterned conductive carbon fibers are then sandwiched between two pre-strained elastic substrates. The overall electrical conductivity of the sensor changes as the edges of the patterned carbon fiber come out of contact with each other, similar to the way the individual spirals of a Slinky come out of contact with each other when you pull both ends. This process happens even with small amounts of strain, which is the key to the sensor’s high sensitivity.
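A toy model conveys why this transduction is so sensitive: treat the touching edges as parallel resistors, so that each contact that opens under strain raises the overall resistance, steeply once few contacts remain. This caricature is an assumption for illustration, not the paper’s actual model:

```python
def sensor_resistance(n_contacts, r_contact):
    """Resistance of n_contacts equal parallel contact points.

    Toy model only: stretching the serpentine pattern opens contacts
    one by one, so resistance climbs as n_contacts falls and diverges
    when the last contact opens (an open circuit).
    """
    if n_contacts < 1:
        return float("inf")  # no contacts left: open circuit
    return r_contact / n_contacts  # parallel resistors of equal value
```

In this picture, halving the number of closed contacts doubles the resistance, so even a small strain produces a large, easily measured electrical signal.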

Close-up of the sensor material
A close-up view of the sensor’s patterned conductive carbon fibers. The fibers are sandwiched between two prestrained elastic substrates. The overall electrical conductivity of the sensor changes as the edges of the patterned carbon fiber come out of contact with each other. Credit: James Weaver/Harvard SEAS

Unlike current highly sensitive stretchable sensors, which rely on exotic materials such as silicon or gold nanowires, this sensor doesn’t require special manufacturing techniques or even a clean room. It could be made using any conductive material.

The researchers tested the resiliency of the sensor by stabbing it with a scalpel, hitting it with a hammer, running it over with a car, and throwing it in a washing machine ten times. The sensor emerged from each test unscathed. To demonstrate its sensitivity, the researchers embedded the sensor in a fabric arm sleeve and asked a participant to make different gestures with their hand, including a fist, open palm, and pinching motion. The sensors detected the small changes in the subject’s forearm muscle through the fabric and a machine learning algorithm was able to successfully classify these gestures.
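The article does not specify which machine learning model or features were used; as a hedged sketch of the idea, a nearest-centroid classifier over per-gesture feature vectors (hypothetical features, not the study’s) captures the flavor of mapping strain signals to gestures:

```python
def train_centroids(samples):
    """samples: {gesture_label: [feature_vectors]} -> {label: centroid}.

    Illustrative stand-in for the study's (unspecified) classifier:
    each gesture is summarized by the mean of its training features.
    """
    cents = {}
    for label, vecs in samples.items():
        n = len(vecs)
        cents[label] = [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]
    return cents

def classify(cents, x):
    """Assign x to the gesture whose centroid is nearest (squared distance)."""
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(cents, key=lambda label: d2(cents[label], x))
```

Any classifier would do here; the point is that the sleeve’s strain readings are distinct enough per gesture for a simple model to separate them.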

“These features of resilience and the mechanical robustness put this sensor in a whole new camp,” said Araromi.

Such a sleeve could be used in everything from virtual reality simulations and sportswear to clinical diagnostics for neurodegenerative diseases like Parkinson’s Disease. Harvard’s Office of Technology Development has filed to protect the intellectual property associated with this project.

“The combination of high sensitivity and resilience are clear benefits of this type of sensor,” said senior author Robert Wood, Ph.D., Associate Faculty member at the Wyss Institute, and the Charles River Professor of Engineering and Applied Sciences at SEAS. “But another aspect that differentiates this technology is the low cost of the constituent materials and assembly methods. This will hopefully reduce the barriers to get this technology widespread in smart textiles and beyond.”

Sensor twist
This ultra-sensitive resilient strain sensor can be embedded in textiles and soft robotic systems. Credit: Oluwaseun Araromi/Harvard SEAS

“We are currently exploring how this sensor can be integrated into apparel due to the intimate interface to the human body it provides,” says co-author and Wyss Associate Faculty member Conor Walsh, Ph.D., who also is the Paul A. Maeder Professor of Engineering and Applied Sciences at SEAS. “This will enable exciting new applications by being able to make biomechanical and physiological measurements throughout a person’s day, not possible with current approaches.”

The research was co-authored by Moritz A. Graule, Kristen L. Dorsey, Sam Castellanos, Jonathan R. Foster, Wen-Hao Hsu, Arthur E. Passy, James C. Weaver, Senior Staff Scientist at SEAS, and Joost J. Vlassak, the Abbott and James Lawrence Professor of Materials Engineering at SEAS. It was funded through the university’s strategic research alliance with Tata. The 6-year, $8.4M alliance was established in 2016 to advance Harvard innovation in fields including robotics, wearable technologies, and the internet of things (IoT).

Wearable technologies to make rehab more precise

Therapist holding the arm of a patient who is wearing an intelligent wearable device
A team led by Wyss Associate Faculty member Paolo Bonato, Ph.D., found in a recent study that wearable technology is suitable to accurately track motor recovery of individuals with brain injuries and thus allow clinicians to choose more effective interventions and to improve outcomes. Credit: Shutterstock/Dmytro Zinkevych

By Tim Sullivan / Spaulding Rehabilitation Hospital Communications

A group based out of the Spaulding Motion Analysis Lab at Spaulding Rehabilitation Hospital published “Enabling Precision Rehabilitation Interventions Using Wearable Sensors and Machine Learning to Track Motor Recovery” in npj Digital Medicine. The aim of the study was to lay the groundwork for the design of “precision rehabilitation” interventions by using wearable technologies to track the motor recovery of individuals with brain injury.

The study found that the technology can accurately track motor recovery, allowing clinicians to choose more effective interventions and improve outcomes. The study was a collaborative effort among students and former students connected to the Motion Analysis Lab, carried out under faculty mentorship.

Paolo Bonato, Ph.D., Director of the Spaulding Motion Analysis Lab and senior author on the study, said, “Providing clinicians with precise data will enable them to design more effective interventions and improve the care we deliver. To have so many of our talented young scientists and researchers from our lab collaborate to create this meaningful paper is especially gratifying for all of our faculty who support our ongoing research enterprise.” Bonato is also an Associate Faculty member at Harvard’s Wyss Institute for Biologically Inspired Engineering.

Catherine Adans-Dester, P.T., Ph.D., a member of Dr. Bonato’s team, served as lead author on the manuscript. “The need to develop patient-specific interventions is apparent when one considers that clinical studies often report satisfactory motor gains only in a portion of participants, which suggests that clinical outcomes could be improved if we had better tools to develop patient-specific interventions. Data collected using wearable sensors provides clinicians with the opportunity to do so with little burden on clinicians and patients,” said Dr. Adans-Dester. The approach proposed in the paper relied on machine learning-based algorithms to derive clinical score estimates from wearable sensor data collected during functional motor tasks. Sensor-based score estimates showed strong agreement with those generated by clinicians.

The results of the study demonstrated that wearable sensor data can be used to derive accurate estimates of clinical scores utilized in the clinic to capture the severity of motor impairments and the quality of upper-limb movement patterns. In the study, the upper-limb Fugl-Meyer Assessment (FMA) scale was used to generate clinical scores of the severity of motor impairments, and the Functional Ability Scale (FAS) was used to generate clinical scores of the quality of movement. Wearable sensor data (i.e., accelerometer data) was collected during the performance of eight functional motor tasks taken from the Wolf Motor Function Test, thus providing a sample of gross arm movements and fine motor control tasks. Machine learning-based algorithms were developed to derive accurate estimates of the FMA and FAS clinical scores from the sensor data. A total of 37 participants (16 stroke survivors and 21 traumatic brain injury survivors) were enrolled in the study.
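The pipeline described above — summary features computed from accelerometer recordings, fed into a learned regression model that outputs a clinical score estimate — can be sketched in a few lines. The feature set, model form, and function names below are illustrative assumptions, not the study's actual algorithms, which were more sophisticated.

```python
import numpy as np

def extract_features(acc):
    """Simple summary features from an (n_samples, 3) accelerometer window.
    This feature set is illustrative, not the paper's actual one."""
    mag = np.linalg.norm(acc, axis=1)          # acceleration magnitude
    jerk = np.diff(mag)                        # sample-to-sample change
    return np.array([
        mag.mean(),
        mag.std(),
        np.sqrt((mag ** 2).mean()),            # RMS
        np.abs(jerk).mean(),                   # mean absolute jerk
    ])

def fit_score_model(windows, scores):
    """Least-squares fit mapping sensor features to clinical scores (e.g. FMA)."""
    X = np.array([extract_features(w) for w in windows])
    X = np.hstack([X, np.ones((len(X), 1))])   # bias term
    coef, *_ = np.linalg.lstsq(X, np.array(scores), rcond=None)
    return coef

def predict_score(coef, window):
    """Estimate a clinical score for a new sensor window."""
    x = np.append(extract_features(window), 1.0)
    return float(x @ coef)
```

In practice a model like this would be trained on sensor windows labeled with clinician-assigned FMA or FAS scores, then used to estimate scores for new recordings.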

Involved in the study, in addition to Dr. Bonato and Dr. Adans-Dester, were Nicolas Hankov, Anne O’Brien, Gloria Vergara-Diaz, Randie Black-Schaffer, MD, and Ross Zafonte, DO, of the Harvard Medical School Department of Physical Medicine & Rehabilitation at Spaulding Rehabilitation Hospital, Boston, MA; Jennifer Dy of the Department of Electrical and Computer Engineering, Northeastern University, Boston, MA; and Sunghoon I. Lee of the College of Information and Computer Sciences, University of Massachusetts Amherst, Amherst, MA.

Cutting surgical robots down to size

By Lindsay Brownell

Minimally invasive laparoscopic surgery, in which a surgeon uses tools and a tiny camera inserted into small incisions to perform operations, has made surgical procedures safer for both patients and doctors over the last half-century. Recently, surgical robots have started to appear in operating rooms to further assist surgeons by allowing them to manipulate multiple tools at once with greater precision, flexibility, and control than is possible with traditional techniques. However, these robotic systems are extremely large, often taking up an entire room, and their tools can be much larger than the delicate tissues and structures on which they operate.

The mini-RCM is controlled by three linear actuators (mini-LAs) that allow it to move in multiple dimensions and help correct hand tremors and other disturbances during teleoperation. Credit: Wyss Institute at Harvard University

A collaboration between Wyss Associate Faculty member Robert Wood, Ph.D. and Robotics Engineer Hiroyuki Suzuki of Sony Corporation has brought surgical robotics down to the microscale by creating a new, origami-inspired miniature remote center of motion manipulator (the “mini-RCM”). The robot is the size of a tennis ball, weighs about as much as a penny, and successfully performed a difficult mock surgical task, as described in a recent issue of Nature Machine Intelligence.

“The Wood lab’s unique technical capabilities for making micro-robots have led to a number of impressive inventions over the last few years, and I was convinced that it also had the potential to make a breakthrough in the field of medical manipulators as well,” said Suzuki, who began working with Wood on the mini-RCM in 2018 as part of a Harvard-Sony collaboration. “This project has been a great success.”

A mini robot for micro tasks

To create their miniature surgical robot, Suzuki and Wood turned to the Pop-Up MEMS manufacturing technique developed in Wood’s lab, in which materials are deposited on top of each other in layers that are bonded together, then laser-cut in a specific pattern that allows the desired three-dimensional shape to “pop up,” as in a children’s pop-up picture book. This technique greatly simplifies the mass-production of small, complex structures that would otherwise have to be painstakingly constructed by hand.

The team created a parallelogram shape to serve as the main structure of the robot, then fabricated three linear actuators (mini-LAs) to control the robot’s movement: one parallel to the bottom of the parallelogram that raises and lowers it, one perpendicular to the parallelogram that rotates it, and one at the tip of the parallelogram that extends and retracts the tool in use. The result was a robot that is much smaller and lighter than other microsurgical devices previously developed in academia.

The mini-LAs are themselves marvels in miniature, built around a piezoelectric ceramic material that changes shape when an electrical field is applied. The shape change pushes the mini-LA’s “runner unit” along its “rail unit” like a train on train tracks, and that linear motion is harnessed to move the robot. Because piezoelectric materials inherently deform as they change shape, the team also integrated LED-based optical sensors into the mini-LA to detect and correct any deviations from the desired movement, such as those caused by hand tremors.
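The sense-and-correct behavior described above can be illustrated with a toy proportional feedback loop: the optical sensor measures position error, and the actuator steps to reduce it even as tremor-like disturbances push it off course. The gain, the ideal-sensor assumption, and the disturbance model are simplifications for illustration, not the mini-LA's actual controller.

```python
def corrected_motion(targets, disturbance, k_p=0.8):
    """Toy proportional feedback loop for a position-sensed linear actuator.

    targets: desired positions at each time step
    disturbance: unwanted motion added at each step (e.g. hand tremor)
    k_p: proportional gain (illustrative value)
    """
    pos = 0.0
    trace = []
    for t, d in zip(targets, disturbance):
        error = t - pos           # measured by the optical sensor
        pos += k_p * error + d    # commanded correction plus unwanted motion
        trace.append(pos)
    return trace
```

With the gain below 1 the loop converges toward the target, and small disturbances produce only small, bounded tracking errors rather than accumulating drift.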

Steadier than a surgeon’s hands

To mimic the conditions of a teleoperated surgery, the team connected the mini-RCM to a Phantom Omni device, which manipulated the mini-RCM in response to the movements of a user’s hand controlling a pen-like tool. Their first test evaluated a human’s ability to trace a tiny square smaller than the tip of a ballpoint pen, looking through a microscope and either tracing it by hand or tracing it using the mini-RCM. Using the mini-RCM dramatically improved user accuracy, reducing error by 68% compared to manual operation – an especially important quality given the precision required to repair small and delicate structures in the human body.

After the mini-RCM’s success on the tracing test, the researchers then created a mock version of a surgical procedure called retinal vein cannulation, in which a surgeon must carefully insert a needle through the eye to inject therapeutics into the tiny veins at the back of the eyeball. They fabricated a silicone tube the same size as the retinal vein (about twice the thickness of a human hair), and successfully punctured it with a needle attached to the end of the mini-RCM without causing local damage or disruption.

In addition to its efficacy in performing delicate surgical maneuvers, the mini-RCM’s small size provides another important benefit: it is easy to set up and install and, in the case of a complication or electrical outage, the robot can be easily removed from a patient’s body by hand.

“The Pop-Up MEMS method is proving to be a valuable approach in a number of areas that require small yet sophisticated machines, and it was very satisfying to know that it has the potential to improve the safety and efficiency of surgeries to make them even less invasive for patients,” said Wood, who is also the Charles River Professor of Engineering and Applied Sciences at Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS).

The researchers aim to increase the force of the robot’s actuators to cover the maximum forces experienced during an operation, and improve its positioning precision. They are also investigating using a laser with a shorter pulse during the machining process, to improve the mini-LAs’ sensing resolution.

“This unique collaboration between the Wood lab and Sony illustrates the benefits that can arise from combining the real-world focus of industry with the innovative spirit of academia, and we look forward to seeing the impact this work will have on surgical robotics in the near future,” said Wyss Institute Founding Director Don Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and Boston Children’s Hospital, and Professor of Bioengineering at SEAS.

Next-generation cockroach-inspired robot is small but mighty

The newly designed HAMR-Jr alongside its predecessor, HAMR-VI. HAMR-Jr is only slightly bigger in length and width than a penny, making it one of the smallest yet highly capable, high-speed insect-scale robots. Credit: Kaushik Jayaram/Harvard SEAS

By Leah Burrows

This itsy-bitsy robot can’t climb up the waterspout yet, but it can run, jump, carry heavy payloads, and turn on a dime. Dubbed HAMR-JR, this microrobot, developed by researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and Harvard’s Wyss Institute for Biologically Inspired Engineering, is a half-scale version of the cockroach-inspired Harvard Ambulatory Microrobot, or HAMR.

About the size of a penny, HAMR-JR can perform almost all of the feats of its larger-scale predecessor, making it one of the most dexterous microrobots to date.

“Most robots at this scale are pretty simple and only demonstrate basic mobility,” said Kaushik Jayaram, Ph.D., a former postdoctoral fellow at SEAS and the Wyss Institute, and first author of the paper. “We have shown that you don’t have to compromise dexterity or control for size.”

Jayaram is currently an Assistant Professor at the University of Colorado, Boulder.

The research was presented virtually at the International Conference on Robotics and Automation (ICRA 2020) this week.

One of the big questions going into this research was whether or not the pop-up manufacturing process used to build previous versions of HAMR and other microbots, including the RoboBee, could be used to build robots at multiple scales — from tiny surgical bots to large-scale industrial robots.

PC-MEMS (short for printed circuit microelectromechanical systems) is a fabrication process in which the robot’s components are etched into a 2D sheet and then popped up into their 3D structure. To build HAMR-JR, the researchers simply shrunk the 2D sheet design of the robot — along with the actuators and onboard circuitry — to recreate a smaller robot with all the same functionalities.

“The wonderful part about this exercise is that we did not have to change anything about the previous design,” said Jayaram. “We proved that this process can be applied to basically any device at a variety of sizes.”

HAMR Jr. can turn right, left and move forward and backward. Credit: Kaushik Jayaram/Harvard SEAS

HAMR-JR comes in at 2.25 centimeters in body length and weighs about 0.3 grams — a fraction of the weight of an actual penny. It can run about 14 body lengths per second, making it not only one of the smallest but also one of the fastest microrobots.

Scaling down does change some of the principles governing things like stride length and joint stiffness, so the researchers also developed a model that can predict locomotion metrics like running speeds, foot forces, and payload based on a target size. The model can then be used to design a system with the required specifications.
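The idea of a predictive scaling model can be sketched with textbook geometric-scaling exponents: mass scales with volume, actuator force with cross-sectional area. The exponents and the baseline numbers below are illustrative assumptions loosely modeled on the full-size HAMR, not the researchers' fitted model (the article reports HAMR-JR at 2.25 cm and about 0.3 g).

```python
def scaled_metrics(scale, base):
    """Isometric scaling of robot metrics by linear scale factor `scale`.
    Exponents are classic geometric-scaling assumptions, not fitted values."""
    return {
        "body_length_cm": base["body_length_cm"] * scale,       # length ~ L
        "mass_g":         base["mass_g"] * scale ** 3,          # volume ~ L^3
        "foot_force_mN":  base["foot_force_mN"] * scale ** 2,   # actuator area ~ L^2
        "payload_g":      base["payload_g"] * scale ** 2,
    }

# Hypothetical baseline loosely modeled on the full-size HAMR
hamr = {"body_length_cm": 4.5, "mass_g": 2.8, "foot_force_mN": 100.0, "payload_g": 2.9}
half = scaled_metrics(0.5, hamr)   # HAMR-JR is a half-scale version
```

Note how even naive scaling predicts a roughly 8x drop in mass for a 2x drop in length — consistent with the reported penny-fraction weight — while forces and payload fall only 4x.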

“This new robot demonstrates that we have a good grasp on the theoretical and practical aspects of scaling down complex robots using our folding-based assembly approach,” said co-author Robert Wood, Ph.D., Charles River Professor of Engineering and Applied Sciences in SEAS and Core Faculty Member of the Wyss Institute.

This research was co-authored by Jennifer Shum, Samantha Castellanos and E. Farrell Helbling, Ph.D. This research was supported by the Defense Advanced Research Projects Agency (DARPA) and the Wyss Institute.

New study uses robots to uncover the connections between the human mind and walking control

Using a robot to disrupt the gait cycle of participants, researchers discovered that feedforward mechanisms controlled by the cerebellum and feedback mechanisms controlled at the spinal level determine how the nervous system responds to robot-induced changes in step length. Credit: Wyss Institute at Harvard University

By Tim Sullivan, Spaulding Rehabilitation Network Communications

Many of us aren’t spending much time outside lately, but there are still many obstacles for us to navigate as we walk around: the edge of the coffee table, small children, the family dog. How do our brains adjust to changes in our walking strides? Researchers at the Wyss Institute for Biologically Inspired Engineering at Harvard University and the Motion Analysis Laboratory at Spaulding Rehabilitation Hospital used robots to try to answer that question, and discovered that mechanisms in both the cerebellum and the spinal cord determine how the nervous system responds to robot-induced changes in step length. The new study is published in the latest issue of Scientific Reports, and points the way toward improving robot-based physical rehabilitation programs for patients.

“Our understanding of the neural mechanisms underlying locomotor adaptation is still limited. Specifically, how behavioral, functional, and physiological processes work in concert to achieve adaptation during locomotion has remained elusive to date,” said Paolo Bonato, Ph.D., an Associate Faculty member of the Wyss Institute and Director of the Spaulding Motion Analysis Lab who led the study. “Our goal is to create a better understanding of this process and hence develop more effective clinical interventions.”

For the study, the team used a robot to apply two opposite unilateral mechanical perturbations to human subjects as they walked, affecting their step length over multiple gait cycles. Electrical signals recorded from the subjects’ muscles were analyzed to determine how muscle synergies (the coordinated activation of a group of muscles to create a specific movement) change in response to perturbation. The results revealed a combination of feedforward control signals coming from the cerebellum and feedback-driven control signals arising in the spinal cord during adaptation. The relative side-specific contributions of the two processes to motor-output adjustments, however, depended on which type of perturbation was delivered. Overall, the observations provide evidence that, in humans, both descending and afferent drives project onto the same spinal interneuronal networks that encode locomotor muscle synergies.
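Muscle synergies are conventionally extracted from EMG envelopes by non-negative matrix factorization (NMF), which decomposes the muscle-by-time signal matrix into a small set of muscle weightings and their activation profiles. The minimal multiplicative-update sketch below illustrates that idea; the function name and parameters are hypothetical, and the study's actual analysis pipeline may differ.

```python
import numpy as np

def nmf_synergies(emg, n_syn, n_iter=2000, seed=0):
    """Factor a non-negative EMG envelope matrix into synergies.

    emg: (n_muscles, n_timepoints) array, non-negative
    Returns W (muscle weightings, n_muscles x n_syn) and
    H (activation profiles, n_syn x n_timepoints) with emg ~ W @ H.
    """
    rng = np.random.default_rng(seed)
    n_mus, n_t = emg.shape
    W = rng.random((n_mus, n_syn))
    H = rng.random((n_syn, n_t))
    eps = 1e-12  # avoid division by zero
    for _ in range(n_iter):
        # Lee & Seung multiplicative updates (preserve non-negativity)
        H *= (W.T @ emg) / (W.T @ W @ H + eps)
        W *= (emg @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Comparing the extracted weightings and activations before and during perturbation is one way to quantify how synergies change in response to the robot.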

Researchers study how our brains adjust to changes in our walking strides, gaining insights that could be used to develop better physical rehabilitation programs. Credit: Wyss Institute.

These results mirror previous observations from animal studies, strongly suggesting the presence of a defined population of spinal interneurons regulating muscle coordination that can be accessed by both cortical and afferent drives in humans. “Our team hopes to build on this work to develop new approaches to the design of robot-assisted gait rehabilitation procedures targeting specific descending- and afferent-driven responses in muscle synergies in the coming year,” said Bonato.

The Tentacle Bot

By Leah Burrows

Of all the cool things about octopuses (and there are a lot), their arms may rank among the coolest.

Two-thirds of an octopus’s neurons are in its arms, meaning each arm literally has a mind of its own. Octopus arms can untie knots, open childproof bottles, and wrap around prey of any shape or size. The hundreds of suckers that cover their arms can form strong seals even on rough surfaces underwater.

Imagine if a robot could do all that.

Researchers have developed an octopus-inspired robot that can grip, move, and manipulate a wide range of objects. Credit: Elias Knubben, Zhexin Xie, August Domel, and Li Wen

Researchers at Harvard’s Wyss Institute for Biologically Inspired Engineering and John A. Paulson School of Engineering and Applied Sciences (SEAS) and colleagues from Beihang University have developed an octopus-inspired soft robotic arm that can grip, move, and manipulate a wide range of objects. Its flexible, tapered design, complete with suction cups, gives the gripper a firm grasp on objects of all shapes, sizes and textures — from eggs to smartphones to large exercise balls.

“Most previous research on octopus-inspired robots focused either on mimicking the suction or the movement of the arm, but not both,” said co-first author August Domel, Ph.D., a Postdoctoral Scholar at Stanford University and former graduate student at the Wyss Institute and Harvard. “Our research is the first to quantify the tapering angles of the arms and the combined functions of bending and suction, which allows for a single small gripper to be used for a wide range of objects that would otherwise require the use of multiple grippers.”

The research is published in Soft Robotics.

The researchers began by studying the tapering angle of real octopus arms and quantifying which design for bending and grabbing objects would work best for a soft robot. Next, the team looked at the layout and structure of the suckers (yes, that is the scientific term) and incorporated them into the design.

“We mimicked the general structure and distribution of these suckers for our soft actuators,” said co-first author Zhexin Xie, Ph.D., a graduate student at Beihang University. “Although our design is much simpler than its biological counterpart, these vacuum-based biomimetic suckers can attach to almost any object.”

Xie is the co-inventor of the Festo Tentacle Gripper, which is the first fully integrated implementation of this technology in a commercial prototype.

The soft robot is controlled with two valves, one to apply pressure for bending the arm and one for a vacuum that engages the suckers. By changing the pressure and vacuum, the arm can attach to any object, wrap around it, carry it, and release it. Credit: Bertoldi Lab/Harvard SEAS

Researchers control the arm with two valves, one to apply pressure for bending the arm and one as a vacuum that engages the suckers. By changing the pressure and vacuum, the arm can attach to an object, wrap around it, carry it, and release it.
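The two-valve scheme maps naturally onto a tiny state model — one boolean per valve — even though the real gripper's pressures are of course continuous. Everything below is a toy illustration of the attach-wrap-release cycle described above, not the authors' control code.

```python
from dataclasses import dataclass

@dataclass
class OctopusArm:
    """Toy model of the two-valve control scheme: one valve pressurizes
    the arm to bend it, the other pulls vacuum to engage the suckers."""
    pressure_on: bool = False   # bending valve
    vacuum_on: bool = False     # sucker valve

    def state(self):
        if self.pressure_on and self.vacuum_on:
            return "wrapped and gripping"
        if self.pressure_on:
            return "bending"
        if self.vacuum_on:
            return "suction only"
        return "relaxed"

def pick_and_place(arm):
    """Sequence mirroring the attach, wrap, carry, release cycle."""
    steps = []
    arm.vacuum_on = True;  steps.append(arm.state())   # attach via suckers
    arm.pressure_on = True; steps.append(arm.state())  # wrap around object
    arm.vacuum_on = False; arm.pressure_on = False
    steps.append(arm.state())                          # release
    return steps
```

The appeal of the design is exactly this simplicity: two independent control inputs are enough to cover the whole grasping repertoire.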

The researchers successfully tested the device on many different objects, including thin sheets of plastic, coffee mugs, test tubes, eggs, and even live crabs. The tapered design also allowed the arm to squeeze into confined spaces and retrieve objects.

“The results from our study not only provide new insights into the creation of next-generation soft robotic actuators for gripping a wide range of morphologically diverse objects, but also contribute to our understanding of the functional significance of arm taper angle variability across octopus species,” said Katia Bertoldi, Ph.D., an Associate Faculty member of the Wyss Institute who is also the William and Ami Kuan Danoff Professor of Applied Mechanics at SEAS, and co-senior author of the study.

This research was also co-authored by James Weaver from the Wyss Institute, Ning An and Connor Green from Harvard SEAS, Zheyuan Gong, Tianmiao Wang, and Li Wen from Beihang University, and Elias M. Knubben from Festo SE & Co. It was supported in part by the National Science Foundation under grant DMREF-1533985 and Festo Corporate’s project division.

RoboBee powered by soft muscles

The Wyss Institute’s and SEAS robotics team built different models of the soft actuator-powered RoboBee. Shown here are a four-wing, two-actuator model and an eight-wing, four-actuator model, the latter being the first soft actuator-powered flying microrobot capable of controlled hovering flight. Credit: Harvard Microrobotics Lab/Harvard SEAS

By Leah Burrows

The sight of a RoboBee careening towards a wall or crashing into a glass box may have once triggered panic in the researchers in the Harvard Microrobotics Laboratory at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), but no more.

Researchers at SEAS and Harvard’s Wyss Institute for Biologically Inspired Engineering have developed a resilient RoboBee powered by soft artificial muscles that can crash into walls, fall onto the floor, and collide with other RoboBees without being damaged. It is the first microrobot powered by soft actuators to achieve controlled flight.

“There has been a big push in the field of microrobotics to make mobile robots out of soft actuators because they are so resilient,” said Yufeng Chen, Ph.D., a former graduate student and postdoctoral fellow at SEAS and first author of the paper. “However, many people in the field have been skeptical that they could be used for flying robots because the power density of those actuators simply hasn’t been high enough and they are notoriously difficult to control. Our actuator has high enough power density and controllability to achieve hovering flight.”

The research is published in Nature.

To solve the problem of power density, the researchers built upon the electrically-driven soft actuators developed in the lab of David Clarke, Ph.D., the Extended Tarr Family Professor of Materials at SEAS. These soft actuators are made using dielectric elastomers, soft materials with good insulating properties that deform when an electric field is applied.

By improving the electrode conductivity, the researchers were able to operate the actuator at 500 Hertz, on par with the rigid actuators used previously in similar robots.

Another challenge when dealing with soft actuators is that the system tends to buckle and become unstable. To solve this challenge, the researchers built a lightweight airframe with a piece of vertical constraining thread to prevent the actuator from buckling.

The soft actuators can be easily assembled and replaced in these small-scale robots. To demonstrate various flight capabilities, the researchers built several different models of the soft actuator-powered RoboBee. A two-wing model could take off from the ground but had no additional control. A four-wing, two-actuator model could fly in a cluttered environment, overcoming multiple collisions in a single flight.

“One advantage of small-scale, low-mass robots is their resilience to external impacts,” said Elizabeth Farrell Helbling, Ph.D., a former graduate student at SEAS and a coauthor on the paper. “The soft actuator provides an additional benefit because it can absorb impact better than traditional actuation strategies. This would come in handy in potential applications such as flying through rubble for search and rescue missions.”

An eight-wing, four-actuator model demonstrated controlled hovering flight, the first for a soft-actuator-powered flying microrobot.

Next, the researchers aim to increase the efficiency of the soft-powered robot, which still lags far behind more traditional flying robots.

“Soft actuators with muscle-like properties and electrical activation represent a grand challenge in robotics,” says Wyss Institute Core Faculty member Robert Wood, Ph.D., who also is the Charles River Professor of Engineering and Applied Sciences in SEAS and senior author of the paper. “If we could engineer high performance artificial muscles, the sky is the limit for what robots we could build.”

Harvard’s Office of Technology Development has protected the intellectual property relating to this project and is exploring commercialization opportunities.

This paper was also co-authored by Huichan Zhao, Jie Mao, Pakpong Chirarattananon, and Nak-seung Patrick Hyun. It was supported in part by the National Science Foundation.

Complex lattices that change in response to stimuli open a range of applications in electronics, robotics, and medicine

By Leah Burrows

What would it take to transform a flat sheet into a human face? How would the sheet need to grow and shrink to form eyes that are concave into the face and a convex nose and chin that protrude?

How to encode and release complex curves in shape-shifting structures is at the center of research led by the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and Harvard’s Wyss Institute for Biologically Inspired Engineering.

Over the past decade, theorists and experimentalists have found inspiration in nature as they have sought to unravel the physics, build mathematical frameworks, and develop materials and 3D and 4D-printing techniques for structures that can change shape in response to external stimuli.

However, complex multi-scale curvature has remained out of reach.

A portrait of Carl Friedrich Gauss painted by Christian Albrecht Jensen in 1840. The researchers generated a 3D surface via an artificial intelligence algorithm. The ribs in the different layers of the lattice are programmed to grow and shrink in response to a change in temperature, mapping the curves of Gauss’ face. Images courtesy of Lori Sanders/Harvard SEAS

Now, researchers have created the most complex shape-shifting structures to date — lattices composed of multiple materials that grow or shrink in response to changes in temperature. To demonstrate their technique, the team printed flat lattices that morph into a frequency-shifting antenna or the face of pioneering mathematician Carl Friedrich Gauss in response to a change in temperature.

The research is published in the Proceedings of the National Academy of Sciences.

“Form both enables and constrains function,” said L. Mahadevan, Ph.D., the de Valpine Professor of Applied Mathematics, and Professor of Physics and Organismic and Evolutionary Biology at Harvard. “Using mathematics and computation to design form, and a combination of multi-scale geometry and multi-material printing to realize it, we are now able to build shape-shifting structures with the potential for a range of functions.”

“Together, we are creating new classes of shape-shifting matter,” said Jennifer A. Lewis, Sc.D., the Hansjörg Wyss Professor of Biologically Inspired Engineering at Harvard. “Using an integrated design and fabrication approach, we can encode complex ‘instruction sets’ within these printed materials that drive their shape-morphing behavior.”

Lewis is also a Core Faculty member of the Wyss Institute.

To create complex and doubly-curved shapes — such as those found on a face — the team turned to a bilayer, multimaterial lattice design.

“The open cells of the curved lattice give it the ability to grow or shrink a lot, even if the material itself undergoes limited extension,” said co-first author Wim M. van Rees, Ph.D., who was a postdoctoral fellow at SEAS and is now an Assistant Professor at MIT.

To achieve complex curves, growing and shrinking the lattice on its own isn’t enough. You need to be able to direct the growth locally.

“That’s where the materials palette that we’ve developed comes in,” said J. William Boley, Ph.D., a former postdoctoral fellow at SEAS and co-first author of the paper. “By printing materials with different thermal expansion behavior in pre-defined configurations, we can control the growth and shrinkage of each individual rib of the lattice, which in turn gives rise to complex bending of the printed lattice both within and out of plane.” Boley is now an Assistant Professor at Boston University.

The researchers used four different materials and programmed each rib of the lattice to change shape in response to a change in temperature. Using this method, they printed a shape-shifting patch antenna, which can change resonant frequencies as it changes shape.
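At the rib level, the programming described above reduces to linear thermal expansion, ε = αΔT, and a mismatch in expansion between two bonded ribs produces bending. The sketch below uses the equal-thickness, equal-modulus limit of the classic bimetal-strip (Timoshenko) result, κ = (3/2)·Δα·ΔT/h; the coefficient values in the test are illustrative, not the properties of the paper's printed inks.

```python
def rib_strain(alpha, d_temp):
    """Linear thermal strain of a single printed rib: eps = alpha * dT."""
    return alpha * d_temp

def bilayer_curvature(alpha_top, alpha_bot, d_temp, thickness_m):
    """Curvature (1/m) of a bonded pair of ribs with equal layer thicknesses
    and equal moduli: kappa = 1.5 * (alpha_bot - alpha_top) * dT / h,
    the equal-thickness limit of Timoshenko's bimetal-strip formula."""
    return 1.5 * (alpha_bot - alpha_top) * d_temp / thickness_m
```

Choosing which of the four materials goes in each layer of each rib sets the sign and magnitude of this local curvature, which is how the flat lattice is steered into a target 3D shape.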

To showcase the ability of the method to create a complex surface with multiscale curvature, the researchers printed the face of the 19th century mathematician who laid the foundations of differential geometry: Carl Friedrich Gauss. Images courtesy of Lori Sanders/Harvard SEAS

To showcase the ability of the method to create a complex surface with multiscale curvature, the researchers decided to print a human face. They chose the face of the 19th century mathematician who laid the foundations of differential geometry: Carl Friedrich Gauss. The researchers began with a 2D portrait of Gauss, painted in 1840, and generated a 3D surface using an open-source artificial intelligence algorithm. They then programmed the ribs in the different layers of the lattice to grow and shrink, mapping the curves of Gauss’ face.

This inverse design approach and multimaterial 4D printing method could be extended to other stimuli-responsive materials and be used to create scalable, reversible, shape-shifting structures with unprecedented complexity.

“Application areas include soft electronics, smart fabrics, tissue engineering, robotics and beyond,” said Boley.

“This work was enabled by recent advances in posing and solving geometric inverse problems combined with 4D-printing technologies using multiple materials. Going forward, our hope is that this multi-disciplinary approach for shaping matter will be broadly adopted,” said Mahadevan.

This research was co-authored by Charles Lissandrello, Mark Horenstein, Ryan Truby, and Arda Kotikian. It was supported by the National Science Foundation and Draper Laboratory.
