
Looking beyond “technology for technology’s sake”

“Learning about the social implications of the technology you’re working on is really important,” says senior Austen Roberson. Photo: Jodi Hilton

By Laura Rosado | MIT News correspondent

Austen Roberson’s favorite class at MIT is 2.S007 (Design and Manufacturing I-Autonomous Machines), in which students design, build, and program a fully autonomous robot to accomplish tasks laid out on a themed game board.

“The best thing about that class is everyone had a different idea,” says Roberson. “We all had the same game board and the same instructions given to us, but the robots that came out of people’s minds were so different.”

The game board was Mars-themed, with a model shuttle that could be lifted to score points. Roberson’s robot, nicknamed Tank Evans after a character from the movie “Surf’s Up,” employed a clever strategy to accomplish this task. Instead of spinning the gears that would raise the entire mechanism, Roberson realized a claw gripper could wrap around the outside of the shuttle and lift it manually.

“That wasn’t the intended way,” says Roberson, but his outside-of-the-box strategy ended up winning him the competition at the conclusion of the class, which was part of the New Engineering Education Transformation (NEET) program. “It was a really great class for me. I get a lot of gratification out of building something with my hands and then using my programming and problem-solving skills to make it move.”

Roberson, a senior, is majoring in aerospace engineering with a minor in computer science. As his winning robot demonstrates, he thrives at the intersection of both fields. He references the Mars Curiosity Rover as the type of project that inspires him; he even keeps a Lego model of Curiosity on his desk. 

“You really have to trust that the hardware you’ve made is up to the task, but you also have to trust your software equally as much,” says Roberson, referring to the challenges of operating a rover from millions of miles away. “Is the robot going to continue to function after we’ve put it into space? Both of those things have to come together in such a perfect way to make this stuff work.”

Outside of formal classwork, Roberson has pursued multiple research opportunities at MIT that blend his academic interests. He’s worked on satellite situational awareness with the Space Systems Laboratory, tested drone flight in different environments with the Aerospace Controls Laboratory, and is currently working on zero-shot machine learning for anomaly detection in big datasets with the Mechatronics Research Laboratory.

“Whether that be space exploration or something else, all I can hope for is that I’m making an impact, and that I’m making a difference in people’s lives,” says Roberson. Photo: Jodi Hilton

Even while tackling these challenging technical problems head-on, Roberson is also actively thinking about the social impact of his work. He takes classes in the Program on Science, Technology, and Society, which has taught him not only how societal change throughout history has been driven by technological advancements, but also how to be a thoughtful engineer in his own career.

“Learning about the social implications of the technology you’re working on is really important,” says Roberson, acknowledging that his work in automation and machine learning needs to address these questions. “Sometimes, we get caught up in technology for technology’s sake. How can we take these same concepts and bring them to people to help in a tangible, physical way? How have we come together as a scientific community to really affect social change, and what can we do in the future to continue affecting that social change?”

Roberson is already working through what these questions mean for him personally. He’s been a member of the National Society of Black Engineers (NSBE) throughout his entire college experience, which includes serving on the executive board for two years. He’s helped to organize workshops focused on everything from interview preparation to financial literacy, as well as social events to build community among members.

“The mission of the organization is to increase the number of culturally responsible Black engineers that excel academically, succeed professionally, and positively impact the community,” says Roberson. “My goal with NSBE was to be able to provide a resource to help everybody get to where they wanted to be, to be the vehicle to really push people to be their best, and to provide the resources that people needed and wanted to advance themselves professionally.”

In fact, one of his most memorable MIT experiences is the first conference he attended as a member of NSBE.

“Being able to see all these different people from all of these different schools come together as a family and just talk to each other, it’s a very rewarding experience,” Roberson says. “It’s important to be able to surround yourself with people who have similar professional goals and share similar backgrounds and experiences with you. It’s definitely the proudest I’ve been of any club at MIT.”

Looking toward his own career, Roberson wants to find a way to work on fast-paced, cutting-edge technologies that move society forward in a positive way.

“Whether that be space exploration or something else, all I can hope for is that I’m making an impact, and that I’m making a difference in people’s lives,” says Roberson. “I think learning about space is learning about ourselves as well. The more you can learn about the stuff that’s out there, you can take those lessons to reflect on what’s down here as well.”

Study: Automation drives income inequality

A newly published paper quantifies the extent to which automation has contributed to income inequality in the U.S., simply by replacing workers with technology — whether self-checkout machines, call-center systems, assembly-line technology, or other devices. Image: Jose-Luis Olivares, MIT

By Peter Dizikes

When you use self-checkout machines in supermarkets and drugstores, you are probably not — with all due respect — doing a better job of bagging your purchases than checkout clerks once did. Automation just makes bagging less expensive for large retail chains.

“If you introduce self-checkout kiosks, it’s not going to change productivity all that much,” says MIT economist Daron Acemoglu. However, in terms of lost wages for employees, he adds, “It’s going to have fairly large distributional effects, especially for low-skill service workers. It’s a labor-shifting device, rather than a productivity-increasing device.”

A newly published study co-authored by Acemoglu quantifies the extent to which automation has contributed to income inequality in the U.S., simply by replacing workers with technology — whether self-checkout machines, call-center systems, assembly-line technology, or other devices. Over the last four decades, the income gap between more- and less-educated workers has grown significantly; the study finds that automation accounts for more than half of that increase.

“This single one variable … explains 50 to 70 percent of the changes or variation in between-group inequality from 1980 to about 2016,” Acemoglu says.

The paper, “Tasks, Automation, and the Rise in U.S. Wage Inequality,” is being published in Econometrica. The authors are Acemoglu, who is an Institute Professor at MIT, and Pascual Restrepo PhD ’16, an assistant professor of economics at Boston University.

So much “so-so automation”

Since 1980 in the U.S., inflation-adjusted incomes of those with college and postgraduate degrees have risen substantially, while inflation-adjusted earnings of men without high school degrees have dropped by 15 percent.

How much of this change is due to automation? Growing income inequality could also stem from, among other things, the declining prevalence of labor unions, market concentration begetting a lack of competition for labor, or other types of technological change.

To conduct the study, Acemoglu and Restrepo used U.S. Bureau of Economic Analysis statistics on the extent to which human labor was used in 49 industries from 1987 to 2016, as well as data on machinery and software adopted in that time. The scholars also used data they had previously compiled about the adoption of robots in the U.S. from 1993 to 2014. In previous studies, Acemoglu and Restrepo have found that robots have by themselves replaced a substantial number of workers in the U.S., helped some firms dominate their industries, and contributed to inequality.

At the same time, the scholars used U.S. Census Bureau metrics, including its American Community Survey data, to track worker outcomes during this time for roughly 500 demographic subgroups, broken out by gender, education, age, race and ethnicity, and immigration status, while looking at employment, inflation-adjusted hourly wages, and more, from 1980 to 2016. By examining the links between changes in business practices alongside changes in labor market outcomes, the study can estimate what impact automation has had on workers.
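As a rough illustration of this kind of group-level analysis (a toy sketch with made-up numbers, not the authors' econometric model), one could regress each group's wage change on a measure of its exposure to automation and ask how much of the between-group variation that single variable explains:

    # Toy illustration (not the authors' model): regress each demographic group's
    # change in wages on a hypothetical measure of automation exposure, then ask
    # how much of the between-group variation that one variable explains.
    import numpy as np

    rng = np.random.default_rng(0)
    n_groups = 500                                        # roughly 500 subgroups in the study
    exposure = rng.uniform(0.0, 0.3, n_groups)            # made-up task-displacement exposure
    wage_change = -0.6 * exposure + rng.normal(0, 0.05, n_groups)   # synthetic outcome

    X = np.column_stack([np.ones(n_groups), exposure])
    coef, *_ = np.linalg.lstsq(X, wage_change, rcond=None)
    residual = wage_change - X @ coef
    r_squared = 1 - residual.var() / wage_change.var()

    print(f"slope: {coef[1]:.2f}, share of variation explained: {r_squared:.2f}")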

Ultimately, Acemoglu and Restrepo conclude that the effects have been profound. Since 1980, for instance, they estimate that automation has reduced the wages of men without a high school degree by 8.8 percent and women without a high school degree by 2.3 percent, adjusted for inflation. 

A central conceptual point, Acemoglu says, is that automation should be regarded differently from other forms of innovation, with its own distinct effects in workplaces, and not just lumped in as part of a broader trend toward the implementation of technology in everyday life generally.

Consider again those self-checkout kiosks. Acemoglu calls these types of tools “so-so technology,” or “so-so automation,” because of the tradeoffs they contain: Such innovations are good for the corporate bottom line, bad for service-industry employees, and not hugely important in terms of overall productivity gains, the real marker of an innovation that may improve our overall quality of life.

“Technological change that creates or increases industry productivity, or productivity of one type of labor, creates [those] large productivity gains but does not have huge distributional effects,” Acemoglu says. “In contrast, automation creates very large distributional effects and may not have big productivity effects.”

A new perspective on the big picture

The results occupy a distinctive place in the literature on automation and jobs. Some popular accounts of technology have forecast a near-total wipeout of jobs in the future. Alternately, many scholars have developed a more nuanced picture, in which technology disproportionately benefits highly educated workers but also produces significant complementarities between high-tech tools and labor.

The current study differs at least in degree from this latter picture, presenting a starker outlook in which automation reduces earning power for workers and potentially limits the extent to which policy solutions — more bargaining power for workers, less market concentration — could mitigate the detrimental effects of automation upon wages.

“These are controversial findings in the sense that they imply a much bigger effect for automation than anyone else has thought, and they also imply less explanatory power for other [factors],” Acemoglu says.

Still, he adds, in the effort to identify drivers of income inequality, the study “does not obviate other nontechnological theories completely. Moreover, the pace of automation is often influenced by various institutional factors, including labor’s bargaining power.”

Labor economists say the study is an important addition to the literature on automation, work, and inequality, and should be reckoned with in future discussions of these issues.

“Acemoglu and Restrepo’s paper proposes an elegant new theoretical framework for understanding the potentially complex effects of technical change on the aggregate structure of wages,” says Patrick Kline, a professor of economics at the University of California, Berkeley. “Their empirical finding that automation has been the dominant factor driving U.S. wage dispersion since 1980 is intriguing and seems certain to reignite debate over the relative roles of technical change and labor market institutions in generating wage inequality.”

For their part, in the paper Acemoglu and Restrepo identify multiple directions for future research. That includes investigating the reaction over time by both business and labor to the increase in automation; the quantitative effects of technologies that do create jobs; and the industry competition between firms that quickly adopted automation and those that did not.

The research was supported in part by Google, the Hewlett Foundation, Microsoft, the National Science Foundation, Schmidt Sciences, the Sloan Foundation, and the Smith Richardson Foundation.

Flocks of assembler robots show potential for making larger structures

Researchers at MIT have made significant steps toward creating robots that could practically and economically assemble nearly anything, including things much larger than themselves, from vehicles to buildings to larger robots. The new system involves large, usable structures built from an array of tiny identical subunits called voxels (the volumetric equivalent of a 2-D pixel). Courtesy of the researchers.

By David L. Chandler

Researchers at MIT have made significant steps toward creating robots that could practically and economically assemble nearly anything, including things much larger than themselves, from vehicles to buildings to larger robots.

The new work, from MIT’s Center for Bits and Atoms (CBA), builds on years of research, including recent studies demonstrating that objects such as a deformable airplane wing and a functional racing car could be assembled from tiny identical lightweight pieces — and that robotic devices could be built to carry out some of this assembly work. Now, the team has shown that both the assembler bots and the components of the structure being built can all be made of the same subunits, and the robots can move independently in large numbers to accomplish large-scale assemblies quickly.

The new work is reported in the journal Nature Communications Engineering, in a paper by CBA doctoral student Amira Abdel-Rahman, Professor and CBA Director Neil Gershenfeld, and three others.

A fully autonomous self-replicating robot assembly system capable of both assembling larger structures, including larger robots, and planning the best construction sequence is still years away, Gershenfeld says. But the new work makes important strides toward that goal, including working out the complex tasks of when to build more robots and how big to make them, as well as how to organize swarms of bots of different sizes to build a structure efficiently without crashing into each other.

As in previous experiments, the new system involves large, usable structures built from an array of tiny identical subunits called voxels (the volumetric equivalent of a 2-D pixel). But while earlier voxels were purely mechanical structural pieces, the team has now developed complex voxels that each can carry both power and data from one unit to the next. This could enable the building of structures that can not only bear loads but also carry out work, such as lifting, moving and manipulating materials — including the voxels themselves.

“When we’re building these structures, you have to build in intelligence,” Gershenfeld says. While earlier versions of assembler bots were connected by bundles of wires to their power source and control systems, “what emerged was the idea of structural electronics — of making voxels that transmit power and data as well as force.” Looking at the new system in operation, he points out, “There’s no wires. There’s just the structure.”

The robots themselves consist of a string of several voxels joined end-to-end. These can grab another voxel using attachment points on one end, then move inchworm-like to the desired position, where the voxel can be attached to the growing structure and released there.

Gershenfeld explains that while the earlier system demonstrated by members of his group could in principle build arbitrarily large structures, as the size of those structures reached a certain point in relation to the size of the assembler robot, the process would become increasingly inefficient because of the ever-longer paths each bot would have to travel to bring each piece to its destination. At that point, with the new system, the bots could decide it was time to build a larger version of themselves that could reach longer distances and reduce the travel time. An even bigger structure might require yet another such step, with the new larger robots creating yet larger ones, while parts of a structure that include lots of fine detail may require more of the smallest robots.
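The decision rule below is a hypothetical sketch of that tradeoff, not the team's published planner: it asks whether the moves saved per placement by doubling a robot's reach would repay an assumed cost of assembling the larger robot.

    # Hypothetical decision heuristic, for illustration only: should the swarm pause
    # and assemble a larger robot (with twice the reach) before continuing to build?
    def should_build_bigger_robot(avg_trip_voxels: float,
                                  reach_voxels: float,
                                  placements_left: int,
                                  build_cost_trips: float = 50.0) -> bool:
        """Return True if doubling the robot's reach would pay for itself.

        avg_trip_voxels: average distance (in voxel units) traveled per placement.
        reach_voxels:    current robot length, which sets how far it steps per move.
        placements_left: voxels still to be placed in the structure.
        build_cost_trips: assumed one-time cost, in trips, of building the bigger robot.
        """
        trips_now = avg_trip_voxels / reach_voxels             # moves per placement today
        trips_bigger = avg_trip_voxels / (2 * reach_voxels)    # moves per placement after doubling
        return (trips_now - trips_bigger) * placements_left > build_cost_trips

    # Example: long hauls across a large structure favor building a bigger robot first.
    print(should_build_bigger_robot(avg_trip_voxels=120, reach_voxels=4, placements_left=300))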


As these robotic devices work on assembling something, Abdel-Rahman says, they face choices at every step along the way: “It could build a structure, or it could build another robot of the same size, or it could build a bigger robot.” Part of the work the researchers have been focusing on is creating the algorithms for such decision-making.

“For example, if you want to build a cone or a half-sphere,” she says, “how do you start the path planning, and how do you divide this shape” into different areas that different bots can work on? The software they developed allows someone to input a shape and get an output that shows where to place the first block, and each one after that, based on the distances that need to be traversed.
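The ordering heuristic below is purely illustrative (the team's planner is more sophisticated): it sorts a target shape's voxel positions by travel distance from a supply point so that nearby cells are placed first.

    # Illustrative placement-ordering sketch (not the team's planner): fill the target
    # shape greedily, placing the voxels closest to the supply point first.
    from math import dist

    def placement_order(target_cells, supply_point=(0, 0, 0)):
        """Return target voxel coordinates sorted by straight-line distance from supply."""
        return sorted(target_cells, key=lambda cell: dist(cell, supply_point))

    # Example: a tiny 2x2 slab of voxels; the nearest cells are scheduled first.
    print(placement_order([(1, 1, 0), (0, 0, 0), (1, 0, 0), (0, 1, 0)]))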

There are thousands of papers published on route-planning for robots, Gershenfeld says. “But the step after that, of the robot having to make the decision to build another robot or a different kind of robot — that’s new. There’s really nothing prior on that.”

While the experimental system can carry out the assembly and includes the power and data links, in the current versions the connectors between the tiny subunits are not strong enough to bear the necessary loads. The team, including graduate student Miana Smith, is now focusing on developing stronger connectors. “These robots can walk and can place parts,” Gershenfeld says, “but we are almost — but not quite — at the point where one of these robots makes another one and it walks away. And that’s down to fine-tuning of things, like the force of actuators and the strength of joints. … But it’s far enough along that these are the parts that will lead to it.”

Ultimately, such systems might be used to construct a wide variety of large, high-value structures. For example, currently the way airplanes are built involves huge factories with gantries much larger than the components they build, and then “when you make a jumbo jet, you need jumbo jets to carry the parts of the jumbo jet to make it,” Gershenfeld says. With a system like this built up from tiny components assembled by tiny robots, “The final assembly of the airplane is the only assembly.”

Similarly, in producing a new car, “you can spend a year on tooling” before the first car gets actually built, he says. The new system would bypass that whole process. Such potential efficiencies are why Gershenfeld and his students have been working closely with car companies, aviation companies, and NASA. But even the relatively low-tech building construction industry could potentially also benefit.

While there has been increasing interest in 3-D-printed houses, today those require printing machinery as large as or larger than the house being built. Again, the potential for such structures to instead be assembled by swarms of tiny robots could provide benefits. And the Defense Advanced Research Projects Agency is also interested in the work for the possibility of building structures for coastal protection against erosion and sea level rise.

The new study shows that both the assembler bots and the components of the structure being built can all be made of the same subunits, and the robots can move independently in large numbers to accomplish large-scale assemblies quickly. Courtesy of the researchers.

Aaron Becker, an associate professor of electrical and computer engineering at the University of Houston, who was not associated with this research, calls this paper “a home run — [offering] an innovative hardware system, a new way to think about scaling a swarm, and rigorous algorithms.”

Becker adds: “This paper examines a critical area of reconfigurable systems: how to quickly scale up a robotic workforce and use it to efficiently assemble materials into a desired structure. … This is the first work I’ve seen that attacks the problem from a radically new perspective — using a raw set of robot parts to build a suite of robots whose sizes are optimized to build the desired structure (and other robots) as fast as possible.”

The research team also included MIT-CBA student Benjamin Jenett and Christopher Cameron, who is now at the U.S. Army Research Laboratory. The work was supported by NASA, the U.S. Army Research Laboratory, and CBA consortia funding.

Magnetic sensors track muscle length

A small, bead-like magnet used in a new approach to measuring muscle position. Image: Courtesy of the researchers

By Anne Trafton | MIT News Office

Using a simple set of magnets, MIT researchers have come up with a sophisticated way to monitor muscle movements, which they hope will make it easier for people with amputations to control their prosthetic limbs.

In a new pair of papers, the researchers demonstrated the accuracy and safety of their magnet-based system, which can track the length of muscles during movement. The studies, performed in animals, offer hope that this strategy could be used to help people with prosthetic devices control them in a way that more closely mimics natural limb movement.

“These recent results demonstrate that this tool can be used outside the lab to track muscle movement during natural activity, and they also suggest that the magnetic implants are stable and biocompatible and that they don’t cause discomfort,” says Cameron Taylor, an MIT research scientist and co-lead author of both papers.

In one of the studies, the researchers showed that they could accurately measure the lengths of turkeys’ calf muscles as the birds ran, jumped, and performed other natural movements. In the other study, they showed that the small magnetic beads used for the measurements do not cause inflammation or other adverse effects when implanted in muscle.

“I am very excited for the clinical potential of this new technology to improve the control and efficacy of bionic limbs for persons with limb-loss,” says Hugh Herr, a professor of media arts and sciences, co-director of the K. Lisa Yang Center for Bionics at MIT, and an associate member of MIT’s McGovern Institute for Brain Research.

Herr is a senior author of both papers, which appear in the journal Frontiers in Bioengineering and Biotechnology. Thomas Roberts, a professor of ecology, evolution, and organismal biology at Brown University, is a senior author of the measurement study.

Tracking movement

Currently, powered prosthetic limbs are usually controlled using an approach known as surface electromyography (EMG). Electrodes attached to the surface of the skin or surgically implanted in the residual muscle of the amputated limb measure electrical signals from a person’s muscles, which are fed into the prosthesis to help it move the way the person wearing the limb intends.

However, that approach does not take into account any information about the muscle length or velocity, which could help to make the prosthetic movements more accurate.

Several years ago, the MIT team began working on a novel way to perform those kinds of muscle measurements, using an approach that they call magnetomicrometry. This strategy takes advantage of the permanent magnetic fields surrounding small beads implanted in a muscle. Using a credit-card-sized, compass-like sensor attached to the outside of the body, their system can track the distances between the two magnets. When a muscle contracts, the magnets move closer together, and when it stretches, they move further apart.
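As a toy illustration of the geometry involved (assuming an idealized point-dipole field rather than the team's actual sensing algorithm), the field of a small magnet falls off with the cube of distance, so a field measurement can be inverted into a rough distance estimate:

    # Toy illustration assuming an idealized point-dipole model, not the published
    # magnetomicrometry algorithm: the on-axis field of a small magnet falls off as
    # 1/r^3, so a measured field strength can be inverted into a distance estimate.
    import math

    MU0 = 4 * math.pi * 1e-7            # vacuum permeability, T*m/A

    def distance_from_field(b_tesla: float, moment_am2: float) -> float:
        """Estimate the sensor-to-magnet distance in meters from the on-axis field."""
        # On-axis dipole field: B = mu0 * 2m / (4 * pi * r^3), solved for r.
        return (MU0 * 2 * moment_am2 / (4 * math.pi * b_tesla)) ** (1.0 / 3.0)

    # Example with made-up numbers: a bead of moment 0.01 A*m^2 read as 20 microtesla.
    print(f"{distance_from_field(20e-6, 0.01) * 100:.1f} cm")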

The new muscle measuring approach takes advantage of the magnetic attraction between two small beads implanted in a muscle. Using a small sensor attached to the outside of the body, the system can track the distances between the two magnets as the muscle contracts and flexes. Image: Courtesy of the researchers

In a study published last year, the researchers showed that this system could be used to accurately measure small ankle movements when the beads were implanted in the calf muscles of turkeys. In one of the new studies, the researchers set out to see if the system could make accurate measurements during more natural movements in a nonlaboratory setting.

To do that, they created an obstacle course of ramps for the turkeys to climb and boxes for them to jump on and off of. The researchers used their magnetic sensor to track muscle movements during these activities, and found that the system could calculate muscle lengths in less than a millisecond.

They also compared their data to measurements taken using a more traditional approach known as fluoromicrometry, a type of X-ray technology that requires much larger equipment than magnetomicrometry. The magnetomicrometry measurements varied from those generated by fluoromicrometry by less than a millimeter, on average.

“We’re able to provide the muscle-length tracking functionality of the room-sized X-ray equipment using a much smaller, portable package, and we’re able to collect the data continuously instead of being limited to the 10-second bursts that fluoromicrometry is limited to,” Taylor says.

Seong Ho Yeon, an MIT graduate student, is also a co-lead author of the measurement study. Other authors include MIT Research Support Associate Ellen Clarrissimeaux and former Brown University postdoc Mary Kate O’Donnell.

Biocompatibility

In the second paper, the researchers focused on the biocompatibility of the implants. They found that the magnets did not generate tissue scarring, inflammation, or other harmful effects. They also showed that the implanted magnets did not alter the turkeys’ gaits, suggesting they did not produce discomfort. William Clark, a postdoc at Brown, is the co-lead author of the biocompatibility study.

The researchers also showed that the implants remained stable for eight months, the length of the study, and did not migrate toward each other, as long as they were implanted at least 3 centimeters apart. The researchers envision that the beads, which consist of a magnetic core coated with gold and a polymer called Parylene, could remain in tissue indefinitely once implanted.

“Magnets don’t require an external power source, and after implanting them into the muscle, they can maintain the full strength of their magnetic field throughout the lifetime of the patient,” Taylor says.

The researchers are now planning to seek FDA approval to test the system in people with prosthetic limbs. They hope to use the sensor to control prostheses similar to the way surface EMG is used now: Measurements regarding the length of muscles will be fed into the control system of a prosthesis to help guide it to the position that the wearer intends.

“The place where this technology fills a need is in communicating those muscle lengths and velocities to a wearable robot, so that the robot can perform in a way that works in tandem with the human,” Taylor says. “We hope that magnetomicrometry will enable a person to control a wearable robot with the same comfort level and the same ease as someone would control their own limb.”

In addition to prosthetic limbs, those wearable robots could include robotic exoskeletons, which are worn outside the body to help people move their legs or arms more easily.

The research was funded by the Salah Foundation, the K. Lisa Yang Center for Bionics at MIT, the MIT Media Lab Consortia, the National Institutes of Health, and the National Science Foundation.

Reprogrammable materials selectively self-assemble

With just a random disturbance that energizes the cubes, they selectively self-assemble into a larger block. Photos courtesy of MIT CSAIL.

By Rachel Gordon | MIT CSAIL

While automated manufacturing is ubiquitous today, it was once a nascent field birthed by inventors such as Oliver Evans, who is credited with creating the first fully automated industrial process, in a flour mill he built and gradually automated in the late 1700s. The processes for creating automated structures or machines are still very top-down, requiring humans, factories, or robots to do the assembling and making.

However, the way nature does assembly is ubiquitously bottom-up; animals and plants are self-assembled at a cellular level, relying on proteins to self-fold into target geometries that encode all the different functions that keep us ticking. For a more bio-inspired, bottom-up approach to assembly, then, human-architected materials need to do better on their own. Making them scalable, selective, and reprogrammable in a way that could mimic nature’s versatility means some teething problems, though. 

Now, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have attempted to get over these growing pains with a new method: introducing magnetically reprogrammable materials that they coat different parts with — like robotic cubes — to let them self-assemble. Key to their process is a way to make these magnetic programs highly selective about what they connect with, enabling robust self-assembly into specific shapes and chosen configurations. 

The soft magnetic material coating the researchers used, sourced from inexpensive refrigerator magnets, endows each of the cubes they built with a magnetic signature on each of its faces. The signatures ensure that each face is selectively attractive to only one other face from all the other cubes, in both translation and rotation. All of the cubes — which run for about 23 cents — can be magnetically programmed at a very fine resolution. Once they’re tossed into a water tank (they used eight cubes for a demo), with a totally random disturbance — you could even just shake them in a box — they’ll bump into each other. If they meet the wrong mate, they’ll drop off, but if they find their suitable mate, they’ll attach. 

An analogy would be to think of a set of furniture parts that you need to assemble into a chair. Traditionally, you’d need a set of instructions to manually assemble parts into a chair (a top-down approach), but using the researchers’ method, these same parts, once programmed magnetically, would self-assemble into the chair using just a random disturbance that makes them collide. Without the signatures they generate, however, the chair would assemble with its legs in the wrong places.

“This work is a step forward in terms of the resolution, cost, and efficacy with which we can self-assemble particular structures,” says Martin Nisser, a PhD student in MIT’s Department of Electrical Engineering and Computer Science (EECS), an affiliate of CSAIL, and the lead author on a new paper about the system. “Prior work in self-assembly has typically required individual parts to be geometrically dissimilar, just like puzzle pieces, which requires individual fabrication of all the parts. Using magnetic programs, however, we can bulk-manufacture homogeneous parts and program them to acquire specific target structures, and importantly, reprogram them to acquire new shapes later on without having to refabricate the parts anew.” 

Using the team’s magnetic plotting machine, one can stick a cube back in the plotter and reprogram it. Every time the plotter touches the material, it creates either a “north”- or “south”-oriented magnetic pixel on the cube’s soft magnetic coating, letting the cubes be repurposed to assemble new target shapes when required. Before plotting, a search algorithm checks each signature for mutual compatibility with all previously programmed signatures to ensure they are selective enough for successful self-assembly.

With self-assembly, you can go the passive or active route. With active assembly, robotic parts modulate their behavior online to locate, position, and bond to their neighbors, and each module needs to be embedded with hardware for the computation, sensing, and actuation required to self-assemble. What’s more, a human or computer is needed in the loop to actively control the actuators embedded in each part to make it move. While active assembly has been successful in reconfiguring a variety of robotic systems, the cost and complexity of the electronics and actuators have been a significant barrier to scaling self-assembling hardware up in numbers and down in size.

With passive methods like these researchers’, there’s no need for embedded actuation and control. Once programmed and set free under a random disturbance that gives them the energy to collide with one another, the parts are on their own to shapeshift, without any guiding intelligence.

If you want a structure built from hundreds or thousands of parts, like a ladder or bridge, for example, you wouldn’t want to manufacture a million uniquely different parts, or to have to re-manufacture them when you need a second structure assembled.

The trick the team used toward this goal lies in the mathematical description of the magnetic signatures, which describes each signature as a 2D matrix of pixels. These matrices ensure that any magnetically programmed parts that shouldn’t connect will interact to produce just as many pixels in attraction as those in repulsion, letting them remain agnostic to all non-mating parts in both translation and rotation. 
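The sketch below illustrates that idea with made-up signatures, not the paper's actual encoding: each face is a small grid of north and south pixels, opposite pixels attract, and a non-mating pair should net to zero in every rotation (translations are handled analogously).

    # Illustrative sketch with made-up signatures (not the paper's encoding): each face
    # carries a small grid of +1 ("north") and -1 ("south") pixels. Opposite pixels
    # attract and like pixels repel, so net attraction is the negative elementwise sum.
    import numpy as np

    def interaction(sig_a: np.ndarray, sig_b: np.ndarray) -> int:
        """Net attraction between two face signatures held face-to-face."""
        return int(-(sig_a * sig_b).sum())

    def is_agnostic(sig_a: np.ndarray, sig_b: np.ndarray) -> bool:
        """Non-mating faces should net to zero attraction in every rotation."""
        return all(interaction(sig_a, np.rot90(sig_b, k)) == 0 for k in range(4))

    sig = np.array([[1, -1], [-1, 1]])         # a toy 2x2 signature
    mate = -sig                                # its intended partner, with poles flipped
    stranger = np.array([[1, 1], [-1, -1]])    # a face it should ignore

    print(interaction(sig, mate))              # strongly attractive: 4
    print(is_agnostic(sig, stranger))          # nets to zero in all rotations: True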

While the system is currently good enough to do self-assembly using a handful of cubes, the team wants to further develop the mathematical descriptions of the signatures. In particular, they want to leverage design heuristics that would enable assembly with very large numbers of cubes, while avoiding computationally expensive search algorithms. 

“Self-assembly processes are ubiquitous in nature, leading to the incredibly complex and beautiful life we see all around us,” says Hod Lipson, the James and Sally Scapa Professor of Innovation at Columbia University, who was not involved in the paper. “But the underpinnings of self-assembly have baffled engineers: How do two proteins destined to join find each other in a soup of billions of other proteins? Lacking the answer, we have been able to self-assemble only relatively simple structures so far, and resort to top-down manufacturing for the rest. This paper goes a long way to answer this question, proposing a new way in which self-assembling building blocks can find each other. Hopefully, this will allow us to begin climbing the ladder of self-assembled complexity.”

Nisser wrote the paper alongside recent EECS graduates Yashaswini Makaram ’21 and Faraz Faruqi SM ’22, both of whom are former CSAIL affiliates; Ryo Suzuki, assistant professor of computer science at the University of Calgary; and MIT associate professor of EECS Stefanie Mueller, who is a CSAIL affiliate. They will present their research at the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022).

Tiny particles work together to do big things

MIT chemical engineers have shown that specialized particles can oscillate together, demonstrating a phenomenon known as emergent behavior. Image: Courtesy of the researchers

By Anne Trafton | MIT News Office

Taking advantage of a phenomenon known as emergent behavior at the microscale, MIT engineers have designed simple microparticles that can collectively generate complex behavior, much the same way that a colony of ants can dig tunnels or collect food.

Working together, the microparticles can generate a beating clock that oscillates at a very low frequency. These oscillations can then be harnessed to power tiny robotic devices, the researchers showed.

“In addition to being interesting from a physics point of view, this behavior can also be translated into an on-board oscillatory electrical signal, which can be very powerful in microrobotic autonomy. There are a lot of electrical components that require such an oscillatory input,” says Jingfan Yang, a recent MIT PhD recipient and one of the lead authors of the new study.

The particles used to create the new oscillator perform a simple chemical reaction that allows the particles to interact with each other through the formation and bursting of tiny gas bubbles. Under the right conditions, these interactions create an oscillator that behaves like a ticking clock, beating at intervals of a few seconds.

“We’re trying to look for very simple rules or features that you can encode into relatively simple microrobotic machines, to get them to collectively do very sophisticated tasks,” says Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT.

Strano is the senior author of the new paper, which appears in Nature Communications. Along with Yang, Thomas Berrueta, a Northwestern University graduate student advised by Professor Todd Murphey, is a lead author of the study.

Collective behavior

Demonstrations of emergent behavior can be seen throughout the natural world, where colonies of insects such as ants and bees accomplish feats that a single member of the group would never be able to achieve.

“Ants have minuscule brains and they do very simple cognitive tasks, but collectively they can do amazing things. They can forage for food and build these elaborate tunnel structures,” Strano says. “Physicists and engineers like myself want to understand these rules because it means we can make tiny things that collectively do complex tasks.”

In this study, the researchers wanted to design particles that could generate rhythmic movements, or oscillations, with a very low frequency. Until now, building low-frequency micro-oscillators has required sophisticated electronics that are expensive and difficult to design, or specialized materials with complex chemistries.

The simple particles that the researchers designed for this study are discs as small as 100 microns in diameter. The discs, made from a polymer called SU-8, have a platinum patch that can catalyze the breakdown of hydrogen peroxide into water and oxygen.

When the particles are placed at the surface of a droplet of hydrogen peroxide on a flat surface, they tend to travel to the top of the droplet. At this liquid-air interface, they interact with any other particles found there. Each particle produces its own tiny bubble of oxygen, and when two particles come close enough that their bubbles interact, the bubbles pop, propelling the particles away from each other. Then, they begin forming new bubbles, and the cycle repeats over and over.
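A back-of-the-envelope way to see how such a grow-and-pop cycle sets a slow beat (the rates below are invented for illustration, not measured values from the study):

    # Back-of-the-envelope relaxation-oscillator model with invented numbers: a bubble
    # grows at a steady rate and bursts when it reaches the size at which neighboring
    # bubbles touch, so the beat period is roughly the burst size over the growth rate.
    growth_rate_um_per_s = 20.0     # hypothetical bubble growth rate
    burst_radius_um = 60.0          # hypothetical radius at which bubbles meet and pop

    period_s = burst_radius_um / growth_rate_um_per_s
    print(f"period ~ {period_s:.1f} s, frequency ~ {1 / period_s:.2f} Hz")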

“One particle by itself stays still and doesn’t do anything interesting, but through teamwork, they can do something pretty amazing and useful, which is actually a difficult thing to achieve at the microscale,” Yang says.

MIT chemical engineers showed that specialized particles can oscillate together, demonstrating a phenomenon known as emergent behavior. At left, two particles oscillate together, and at right, eight particles. Video courtesy of the researchers.

The researchers found that two particles could make a very reliable oscillator, but as more particles were added, the rhythm would get thrown off. However, if they added one particle that was slightly different from the others, that particle could act as a “leader” that reorganized the other particles back into a rhythmic oscillator.

This leader particle is the same size as the other particles but has a slightly larger platinum patch, which enables it to create a larger oxygen bubble. This allows this particle to move to the center of the group, where it coordinates the oscillations of all of the other particles. Using this approach, the researchers found they could create oscillators containing as many as 11 particles.

Depending on the number of particles, this oscillator beats at a frequency of about 0.1 to 0.3 hertz, which is on the order of the low-frequency oscillators that govern biological functions such as walking and the beating of the heart.

Oscillating current

The researchers also showed that they could use the rhythmic beating of these particles to generate an oscillating electric current. To do that, they swapped out the platinum catalyst for a fuel cell made of platinum and ruthenium or gold. The mechanical oscillation of the particles rhythmically alters the resistance from one end of the fuel cell to the other, which converts the voltage generated by the fuel cell to an oscillating current.

“Like a dripping faucet, catalytic microdiscs floating at a liquid interface use a chemical reaction to drive the periodic growth and release of gas bubbles. The study shows how these oscillatory dynamics can be harnessed for mechanical actuation and electrochemical signaling relevant to microrobotics,” says Kyle Bishop, a professor of chemical engineering at Columbia University, who was not involved in the study.

Generating an oscillating current instead of a constant one could be useful for applications such as powering tiny robots that can walk. The MIT researchers used this approach to show that they could power a microactuator, which was previously used as legs on a tiny walking robot developed by researchers at Cornell University. The original version was powered by a laser that had to be alternately pointed at each set of legs, to manually oscillate the current. The MIT team showed that the on-board oscillating current generated by their particles could drive the cyclic actuation of the microrobotic leg, using a wire to transfer the current from the particles to the actuator.

“It shows that this mechanical oscillation can become an electrical oscillation, and then that electrical oscillation can actually power activities that a robot would do,” Strano says.

One possible application for this kind of system would be to control swarms of tiny autonomous robots that could be used as sensors to monitor water pollution.

The research was funded in part by the U.S. Army Research Office, the U.S. Department of Energy, and the National Science Foundation.

Breaking through the mucus barrier

A new drug capsule developed at MIT can help large proteins such as insulin and small-molecule drugs be absorbed in the digestive tract. Image: Felice Frankel

By Anne Trafton | MIT News Office

One reason that it’s so difficult to deliver large protein drugs orally is that these drugs can’t pass through the mucus barrier that lines the digestive tract. This means that insulin and most other “biologic drugs” — drugs consisting of proteins or nucleic acids — have to be injected or administered in a hospital. 

A new drug capsule developed at MIT may one day be able to replace those injections. The capsule has a robotic cap that spins and tunnels through the mucus barrier when it reaches the small intestine, allowing drugs carried by the capsule to pass into cells lining the intestine.

“By displacing the mucus, we can maximize the dispersion of the drug within a local area and enhance the absorption of both small molecules and macromolecules,” says Giovanni Traverso, the Karl van Tassel Career Development Assistant Professor of Mechanical Engineering at MIT and a gastroenterologist at Brigham and Women’s Hospital.

In a study appearing today in Science Robotics, the researchers demonstrated that they could use this approach to deliver insulin as well as vancomycin, an antibiotic peptide that currently has to be injected.

Shriya Srinivasan, a research affiliate at MIT’s Koch Institute for Integrative Cancer Research and a junior fellow at the Society of Fellows at Harvard University, is the lead author of the study.

Tunneling through

For several years, Traverso’s lab has been developing strategies to deliver protein drugs such as insulin orally. This is a difficult task because protein drugs tend to be broken down in the acidic environment of the digestive tract, and they also have difficulty penetrating the mucus barrier that lines the tract.

To overcome those obstacles, Srinivasan came up with the idea of creating a protective capsule that includes a mechanism that can tunnel through mucus, just as tunnel boring machines drill into soil and rock.

“I thought that if we could tunnel through the mucus, then we could deposit the drug directly on the epithelium,” she says. “The idea is that you would ingest this capsule and the outer layer would dissolve in the digestive tract, exposing all these features that start to churn through the mucus and clear it.”

The “RoboCap” capsule, which is about the size of a multivitamin, carries its drug payload in a small reservoir at one end and carries the tunneling features in its main body and surface. The capsule is coated with gelatin that can be tuned to dissolve at a specific pH.

When the coating dissolves, the change in pH triggers a tiny motor inside the RoboCap capsule to start spinning. This motion helps the capsule to tunnel into the mucus and displace it. The capsule is also coated with small studs that brush mucus away, similar to the action of a toothbrush.

The spinning motion also helps to erode the compartment that carries the drug, which is gradually released into the digestive tract.

“What the RoboCap does is transiently displace the initial mucus barrier and then enhance absorption by maximizing the dispersion of the drug locally,” Traverso says. “By combining all of these elements, we’re really maximizing our capacity to provide the optimal situation for the drug to be absorbed.”

Enhanced delivery

In tests in animals, the researchers used this capsule to deliver either insulin or vancomycin, a large peptide antibiotic that is used to treat a broad range of infections, including skin infections as well as infections affecting orthopedic implants. With the capsule, the researchers found that they could deliver 20 to 40 times more drug than a similar capsule without the tunneling mechanism.

Once the drug is released from the capsule, the capsule itself passes through the digestive tract on its own. The researchers found no sign of inflammation or irritation in the digestive tract after the capsule passed through, and they also observed that the mucus layer reforms within a few hours after being displaced by the capsule.

Another approach that some researchers have used to enhance oral delivery of drugs is to give them along with additional drugs that help them cross through the intestinal tissue. However, these enhancers often only work with certain drugs. Because the MIT team’s new approach relies solely on mechanical disruptions to the mucus barrier, it could potentially be applied to a broader set of drugs, Traverso says.

“Some of the chemical enhancers preferentially work with certain drug molecules,” he says. “Using mechanical methods of administration can potentially enable more drugs to have enhanced absorption.”

While the capsule used in this study released its payload in the small intestine, it could also be used to target the stomach or colon by changing the pH at which the gelatin coating dissolves. The researchers also plan to explore the possibility of delivering other protein drugs such as GLP1 receptor agonist, which is sometimes used to treat type 2 diabetes. The capsules could also be used to deliver topical drugs to treat ulcerative colitis and other inflammatory conditions by maximizing the local concentration of the drugs in the tissue to help treat the inflammation.

The research was funded, in part, by the National Institutes of Health and MIT’s Department of Mechanical Engineering.

Other authors of the paper include Amro Alshareef, Alexandria Hwang, Ziliang Kang, Johannes Kuosmanen, Keiko Ishida, Joshua Jenkins, Sabrina Liu, Wiam Abdalla Mohammed Madani, Jochen Lennerz, Alison Hayward, Josh Morimoto, Nina Fitzgerald, and Robert Langer.

MIT engineers build a battery-free, wireless underwater camera

A battery-free, wireless underwater camera developed at MIT could have many uses, including climate modeling. “We are missing data from over 95 percent of the ocean. This technology could help us build more accurate climate models and better understand how climate change impacts the underwater world,” says Associate Professor Fadel Adib. Image: Adam Glanzman

By Adam Zewe | MIT News Office

Scientists estimate that more than 95 percent of Earth’s oceans have never been observed, which means we have seen less of our planet’s ocean than we have the far side of the moon or the surface of Mars.

The high cost of powering an underwater camera for a long time, by tethering it to a research vessel or sending a ship to recharge its batteries, is a steep challenge preventing widespread undersea exploration.

MIT researchers have taken a major step to overcome this problem by developing a battery-free, wireless underwater camera that is about 100,000 times more energy-efficient than other undersea cameras. The device takes color photos, even in dark underwater environments, and transmits image data wirelessly through the water.

The autonomous camera is powered by sound. It converts mechanical energy from sound waves traveling through water into electrical energy that powers its imaging and communications equipment. After capturing and encoding image data, the camera also uses sound waves to transmit data to a receiver that reconstructs the image. 

Because it doesn’t need a power source, the camera could run for weeks on end before retrieval, enabling scientists to search remote parts of the ocean for new species. It could also be used to capture images of ocean pollution or monitor the health and growth of fish raised in aquaculture farms.

“One of the most exciting applications of this camera for me personally is in the context of climate monitoring. We are building climate models, but we are missing data from over 95 percent of the ocean. This technology could help us build more accurate climate models and better understand how climate change impacts the underwater world,” says Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science and director of the Signal Kinetics group in the MIT Media Lab, and senior author of a new paper on the system.

Joining Adib on the paper are co-lead authors and Signal Kinetics group research assistants Sayed Saad Afzal, Waleed Akbar, and Osvy Rodriguez, as well as research scientist Unsoo Ha, and former group researchers Mario Doumet and Reza Ghaffarivardavagh. The paper is published in Nature Communications.

Going battery-free

To build a camera that could operate autonomously for long periods, the researchers needed a device that could harvest energy underwater on its own while consuming very little power.

The camera acquires energy using transducers made from piezoelectric materials that are placed around its exterior. Piezoelectric materials produce an electric signal when a mechanical force is applied to them. When a sound wave traveling through the water hits the transducers, they vibrate and convert that mechanical energy into electrical energy.

Those sound waves could come from any source, like a passing ship or marine life. The camera stores harvested energy until it has built up enough to power the electronics that take photos and communicate data.
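A duty-cycling sketch of that idea, with hypothetical energy numbers rather than measured ones:

    # Duty-cycling sketch with hypothetical energy numbers: store trickle-harvested
    # energy until there is enough for one capture-and-transmit cycle.
    def photos_taken(harvest_uj_per_s: float, capture_cost_uj: float, seconds: int) -> int:
        stored_uj, photos = 0.0, 0
        for _ in range(seconds):
            stored_uj += harvest_uj_per_s          # energy arriving from ambient sound
            if stored_uj >= capture_cost_uj:       # enough for one photo plus transmission
                stored_uj -= capture_cost_uj
                photos += 1
        return photos

    # Example: 5 microjoules per second harvested, 600 microjoules per capture.
    print(photos_taken(5.0, 600.0, 1000))          # about 8 photos in 1,000 seconds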

To keep power consumption as low as possible, the researchers used off-the-shelf, ultra-low-power imaging sensors. But these sensors only capture grayscale images. And since most underwater environments lack a light source, they needed to develop a low-power flash, too.

“We were trying to minimize the hardware as much as possible, and that creates new constraints on how to build the system, send information, and perform image reconstruction. It took a fair amount of creativity to figure out how to do this,” Adib says.

They solved both problems simultaneously using red, green, and blue LEDs. When the camera captures an image, it shines a red LED and then uses image sensors to take the photo. It repeats the same process with green and blue LEDs.

Even though the image looks black and white, the red, green, and blue colored light is reflected in the white part of each photo, Akbar explains. When the image data are combined in post-processing, the color image can be reconstructed.

“When we were kids in art class, we were taught that we could make all colors using three basic colors. The same rules follow for color images we see on our computers. We just need red, green, and blue — these three channels — to construct color images,” he says.
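That post-processing step can be sketched in a few lines (the file names are placeholders, and this is an illustration rather than the team's actual pipeline), using the Pillow imaging library:

    # Minimal post-processing sketch using the Pillow imaging library: stack three
    # grayscale frames, one captured under each LED color, into a single color image.
    # The file names are hypothetical placeholders.
    import numpy as np
    from PIL import Image

    def reconstruct_color(red_path: str, green_path: str, blue_path: str) -> Image.Image:
        channels = [np.asarray(Image.open(p).convert("L")) for p in (red_path, green_path, blue_path)]
        rgb = np.stack(channels, axis=-1)          # height x width x 3 array
        return Image.fromarray(rgb.astype(np.uint8), mode="RGB")

    # Usage: reconstruct_color("frame_red.png", "frame_green.png", "frame_blue.png").save("color.png")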

Fadel Adib (left), associate professor in the Department of Electrical Engineering and Computer Science and director of the Signal Kinetics group in the MIT Media Lab, and Research Assistant Waleed Akbar display the battery-free wireless underwater camera that their group developed. Image: Adam Glanzman

Sending data with sound

Once image data are captured, they are encoded as bits (1s and 0s) and sent to a receiver one bit at a time using a process called underwater backscatter. The receiver transmits sound waves through the water to the camera, which acts as a mirror to reflect those waves. The camera either reflects a wave back to the receiver or changes its mirror to an absorber so that it does not reflect back.

A hydrophone next to the transmitter senses if a signal is reflected back from the camera. If it receives a signal, that is a bit-1, and if there is no signal, that is a bit-0. The system uses this binary information to reconstruct and post-process the image.

“This whole process, since it just requires a single switch to convert the device from a nonreflective state to a reflective state, consumes five orders of magnitude less power than typical underwater communications systems,” Afzal says.
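On the receiving side, the decoding idea can be sketched as follows; the threshold, bit timing, and framing here are hypothetical, not the system's actual protocol:

    # Toy receiver-side sketch; the threshold, bit timing, and framing are hypothetical.
    # A strong echo in a bit slot is read as 1, silence as 0, and bits are packed into bytes.
    def decode_backscatter(slot_energies, threshold=0.5) -> bytes:
        bits = [1 if e > threshold else 0 for e in slot_energies]
        out = bytearray()
        for i in range(0, len(bits) - 7, 8):       # group into bytes, most significant bit first
            byte = 0
            for b in bits[i:i + 8]:
                byte = (byte << 1) | b
            out.append(byte)
        return bytes(out)

    # Example: sixteen bit slots decode into two bytes of image data.
    print(decode_backscatter([0.9, 0.1, 0.8, 0.1, 0.1, 0.9, 0.1, 0.8,
                              0.1, 0.9, 0.9, 0.1, 0.8, 0.1, 0.9, 0.1]))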

The researchers tested the camera in several underwater environments. In one, they captured color images of plastic bottles floating in a New Hampshire pond. They were also able to take such high-quality photos of an African starfish that tiny tubercles along its arms were clearly visible. The device was also effective at repeatedly imaging the underwater plant Aponogeton ulvaceus in a dark environment over the course of a week to monitor its growth.

Now that they have demonstrated a working prototype, the researchers plan to enhance the device so it is practical for deployment in real-world settings. They want to increase the camera’s memory so it could capture photos in real-time, stream images, or even shoot underwater video.

They also want to extend the camera’s range. They successfully transmitted data 40 meters from the receiver, but pushing that range wider would enable the camera to be used in more underwater settings.

“This will open up great opportunities for research both in low-power IoT devices as well as underwater monitoring and research,” says Haitham Al-Hassanieh, an assistant professor of electrical and computer engineering at the University of Illinois Urbana-Champaign, who was not involved with this research.

This research is supported, in part, by the Office of Naval Research, the Sloan Research Fellowship, the National Science Foundation, the MIT Media Lab, and the Doherty Chair in Ocean Utilization.

New programmable materials can sense their own movements

This image shows 3D-printed crystalline lattice structures with air-filled channels, known as “fluidic sensors,” embedded into the structures (the indents on the middle of lattices are the outlet holes of the sensors.) These air channels let the researchers measure how much force the lattices experience when they are compressed or flattened. Image: Courtesy of the researchers, edited by MIT News

By Adam Zewe | MIT News Office

MIT researchers have developed a method for 3D printing materials with tunable mechanical properties that sense how they are moving and interacting with the environment. The researchers create these sensing structures using just one material and a single run on a 3D printer.

To accomplish this, the researchers began with 3D-printed lattice materials and incorporated networks of air-filled channels into the structure during the printing process. By measuring how the pressure changes within these channels when the structure is squeezed, bent, or stretched, engineers can receive feedback on how the material is moving.

The method opens opportunities for embedding sensors within architected materials, a class of materials whose mechanical properties are programmed through form and composition. Controlling the geometry of features in architected materials alters their mechanical properties, such as stiffness or toughness. For instance, in cellular structures like the lattices the researchers print, a denser network of cells makes a stiffer structure.

This technique could someday be used to create flexible soft robots with embedded sensors that enable the robots to understand their posture and movements. It might also be used to produce wearable smart devices that provide feedback on how a person is moving or interacting with their environment.

“The idea with this work is that we can take any material that can be 3D-printed and have a simple way to route channels throughout it so we can get sensorization with structure. And if you use really complex materials, then you can have motion, perception, and structure all in one,” says co-lead author Lillian Chin, a graduate student in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

Joining Chin on the paper are co-lead author Ryan Truby, a former CSAIL postdoc who is now an assistant professor at Northwestern University; Annan Zhang, a CSAIL graduate student; and senior author Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and director of CSAIL. The paper is published today in Science Advances.

Architected materials

The researchers focused their efforts on lattices, a type of “architected material,” which exhibits customizable mechanical properties based solely on its geometry. For instance, changing the size or shape of cells in the lattice makes the material more or less flexible.

While architected materials can exhibit unique properties, integrating sensors within them is challenging given the materials’ often sparse, complex shapes. Placing sensors on the outside of the material is typically a simpler strategy than embedding sensors within the material. However, when sensors are placed on the outside, the feedback they provide may not fully describe how the material is deforming or moving.

Instead, the researchers used 3D printing to incorporate air-filled channels directly into the struts that form the lattice. When the structure is moved or squeezed, those channels deform and the volume of air inside changes. The researchers can measure the corresponding change in pressure with an off-the-shelf pressure sensor, which gives feedback on how the material is deforming.
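
A rough sense of that feedback loop can be captured in a few lines of code. The sketch below assumes a hypothetical linear calibration between pressure rise and strut compression; in practice the mapping would be measured experimentally for each lattice.

```python
import numpy as np

class FluidicChannel:
    """Toy model of one air-filled sensing channel read by an off-the-shelf pressure sensor."""

    def __init__(self, rest_pressure_kpa, gain_percent_per_kpa):
        # Hypothetical calibration: percent compression per kPa of pressure rise.
        self.rest = rest_pressure_kpa
        self.gain = gain_percent_per_kpa

    def compression(self, reading_kpa):
        """Estimate how much the strut is squeezed from the measured pressure change."""
        return self.gain * (reading_kpa - self.rest)

channel = FluidicChannel(rest_pressure_kpa=101.3, gain_percent_per_kpa=4.0)
for reading in [101.3, 102.1, 103.5]:
    print(f"{reading:6.1f} kPa -> {channel.compression(reading):4.1f} % compression")
```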

Because they are incorporated into the material, these “fluidic sensors” offer advantages over conventional sensor materials.

This image shows a soft robotic finger made from two cylinders composed of a new class of materials known as handed shearing auxetics (HSAs), which bend and rotate. Air-filled channels embedded within the HSA structure connect to pressure sensors (pile of chips in the foreground), which actively measure the pressure change of these “fluidic sensors.” Image: Courtesy of the researchers

“Sensorizing” structures

The researchers incorporate channels into the structure using digital light processing 3D printing. In this method, the structure is drawn out of a pool of resin and hardened into a precise shape using projected light. An image is projected onto the wet resin and areas struck by the light are cured.

But as the process continues, the resin remains stuck inside the sensor channels. The researchers had to remove excess resin before it was cured, using a mix of pressurized air, vacuum, and intricate cleaning.

They used this process to create several lattice structures and demonstrated how the air-filled channels generated clear feedback when the structures were squeezed and bent.

“Importantly, we only use one material to 3D print our sensorized structures. We bypass the limitations of other multimaterial 3D printing and fabrication methods that are typically considered for patterning similar materials,” says Truby.

Building off these results, they also incorporated sensors into a new class of materials developed for motorized soft robots known as handed shearing auxetics, or HSAs. HSAs can be twisted and stretched simultaneously, which enables them to be used as effective soft robotic actuators. But they are difficult to “sensorize” because of their complex forms.

They 3D printed an HSA soft robot capable of several movements, including bending, twisting, and elongating. They ran the robot through a series of movements for more than 18 hours and used the sensor data to train a neural network that could accurately predict the robot’s motion. 
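
As a loose illustration of that last step, the sketch below trains a small off-the-shelf regressor to map pressure readings to a robot pose. The data here are synthetic and the network is a stand-in chosen for brevity, not the architecture used in the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Hypothetical logged data: four fluidic-sensor pressure readings per sample,
# paired with the robot pose (x, y, theta) recorded by an external tracker.
rng = np.random.default_rng(0)
pressures = rng.normal(size=(5000, 4))
poses = pressures @ rng.normal(size=(4, 3)) + 0.05 * rng.normal(size=(5000, 3))

X_train, X_test, y_train, y_test = train_test_split(
    pressures, poses, test_size=0.2, random_state=0)

# Small multilayer perceptron regressing pose from pressure signals.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))
```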

Chin was impressed by the results — the fluidic sensors were so accurate she had difficulty distinguishing between the signals the researchers sent to the motors and the data that came back from the sensors.

“Materials scientists have been working hard to optimize architected materials for functionality. This seems like a simple, yet really powerful idea to connect what those researchers have been doing with this realm of perception. As soon as we add sensing, then roboticists like me can come in and use this as an active material, not just a passive one,” she says.

“Sensorizing soft robots with continuous skin-like sensors has been an open challenge in the field. This new method provides accurate proprioceptive capabilities for soft robots and opens the door for exploring the world through touch,” says Rus.

In the future, the researchers look forward to finding new applications for this technique, such as creating novel human-machine interfaces or soft devices that have sensing capabilities within the internal structure. Chin is also interested in utilizing machine learning to push the boundaries of tactile sensing for robotics.

“The use of additive manufacturing for directly building robots is attractive. It allows for the complexity I believe is required for generally adaptive systems,” says Robert Shepherd, associate professor at the Sibley School of Mechanical and Aerospace Engineering at Cornell University, who was not involved with this work. “By using the same 3D printing process to build the form, mechanism, and sensing arrays, their process will significantly contribute to researchers aiming to build complex robots simply.”

This research was supported, in part, by the National Science Foundation, the Schmidt Science Fellows Program in partnership with the Rhodes Trust, an NSF Graduate Fellowship, and the Fannie and John Hertz Foundation.

Q&A: Warehouse robots that feel by sight

Ted Adelson. Photo courtesy of the Department of Brain and Cognitive Sciences.

By Kim Martineau | MIT Schwarzman College of Computing

More than a decade ago, Ted Adelson set out to create tactile sensors for robots that would give them a sense of touch. The result? A handheld imaging system powerful enough to visualize the raised print on a dollar bill. The technology was spun out as GelSight, a company that answers an industry need for low-cost, high-resolution imaging.

An expert in both human and machine vision, Adelson was pleased to have created something useful. But he never lost sight of his original dream: to endow robots with a sense of touch. In a new Science Hub project with Amazon, he’s back on the case. He plans to build out the GelSight system with added capabilities to sense temperature and vibrations. A professor in MIT’s Department of Brain and Cognitive Sciences, Adelson recently sat down to talk about his work.

Q: What makes the human hand so hard to recreate in a robot?

A: A human finger has soft, sensitive skin, which deforms as it touches things. The question is how to get precise sensing when the sensing surface itself is constantly moving and changing during manipulation.

Q: You’re an expert on human and computer vision. How did touch grab your interest?

A: When my daughters were babies, I was amazed by how skillfully they used their fingers and hands to explore the world. I wanted to understand the way they were gathering information through their sense of touch. Being a vision researcher, I naturally looked for a way to do it with cameras.

Q: How does the GelSight robot finger work? What are its limitations?

A: A camera captures an image of the skin from inside, and a computer vision system calculates the skin’s 3D deformation. GelSight fingers offer excellent tactile acuity, far exceeding that of human fingers. However, the need for an inner optical system limits the sizes and shapes we can achieve today.
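
GelSight-style sensors are generally described as recovering the skin’s shape with photometric stereo: several images under known illumination yield surface normals, which are then integrated into a height map. The toy sketch below, with made-up light directions and a crude integration step, is meant only to convey the idea, not the production pipeline.

```python
import numpy as np

def normals_from_photometric_stereo(images, light_dirs):
    """Recover unit surface normals from images taken under known light directions.

    images:     (k, H, W) intensities, one image per light
    light_dirs: (k, 3) unit vectors pointing toward each light
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                                 # (k, H*W)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)        # Lambertian model: I = L @ n
    n = G / (np.linalg.norm(G, axis=0, keepdims=True) + 1e-9)
    return n.reshape(3, h, w)

def height_from_normals(normals):
    """Integrate surface slopes into a rough height map (simple cumulative-sum integration)."""
    nx, ny, nz = normals
    p, q = -nx / (nz + 1e-9), -ny / (nz + 1e-9)               # dz/dx, dz/dy
    return np.cumsum(p, axis=1) + np.cumsum(q, axis=0)

# Toy usage with random data, just to show the shapes involved.
lights = np.array([[0.5, 0.0, 0.87], [-0.25, 0.43, 0.87], [-0.25, -0.43, 0.87]])
frames = np.random.rand(3, 64, 64)
height = height_from_normals(normals_from_photometric_stereo(frames, lights))
print(height.shape)  # (64, 64)
```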

Q: How did you come up with the idea of giving a robot finger a sense of touch by, in effect, giving it sight?

A: A camera can tell you about the geometry of the surface it is viewing. By putting a tiny camera inside the finger, we can measure how the skin geometry is changing from point to point. This tells us about tactile properties like force, shape, and texture.

Q: How did your prior work on cameras figure in?

A: My prior research on the appearance of reflective materials helped me engineer the optical properties of the skin. We create a very thin matte membrane and light it with grazing illumination so all the details can be seen.

Q: Did you know there was a market for measuring 3D surfaces?

A: No. My postdoc Kimo Johnson posted a YouTube video showing GelSight’s capabilities about a decade ago. The video went viral, and we got a flood of email with interesting suggested applications. People have since used the technology for measuring the microtexture of shark skin, packed snow, and sanded surfaces. The FBI uses it in forensics to compare spent cartridge casings.

Q: What’s GelSight’s main application?  

A: Industrial inspection. For example, an inspector can press a GelSight sensor against a scratch or bump on an airplane fuselage to measure its exact size and shape in 3D. This application may seem quite different from the original inspiration of baby fingers, but it shows that tactile sensing can have many uses. As for robotics, tactile sensing is mainly a research topic right now, but we expect it to increasingly be useful in industrial robots.

Q: You’re now building in a way to measure temperature and vibrations. How do you do that with a camera? How else will you try to emulate human touch?

A: You can convert temperature to a visual signal that a camera can read by using liquid crystals, the molecules that make mood rings and forehead thermometers change color. For vibrations we will use microphones. We also want to extend the range of shapes a finger can have. Finally, we need to understand how to use the information coming from the finger to improve robotics.

Q: Why are we sensitive to temperature and vibrations, and why is that useful for robotics?

A: Identifying material properties is an important aspect of touch. Sensing temperature helps you tell whether something is metal or wood, and whether it is wet or dry. Vibrations can help you distinguish a slightly textured surface, like unvarnished wood, from a perfectly smooth surface, like wood with a glossy finish.

Q: What’s next?

A: Making a tactile sensor is the first step. Integrating it into a useful finger and hand comes next. Then you have to get the robot to use the hand to perform real-world tasks.

Q: Evolution gave us five fingers and two hands. Will robots have the same?

A: Different robots will have different kinds of hands, optimized for different situations. Big hands, small hands, hands with three fingers or six fingers, and hands we can’t even imagine today. Our goal is to provide the sensing capability, so that the robot can skillfully interact with the world.

Robotic lightning bugs take flight

Inspired by fireflies, MIT researchers have created soft actuators that can emit light in different colors or patterns. Credits: Courtesy of the researchers

By Adam Zewe | MIT News Office

Fireflies that light up dusky backyards on warm summer evenings use their luminescence for communication — to attract a mate, ward off predators, or lure prey.

These glimmering bugs also sparked the inspiration of scientists at MIT. Taking a cue from nature, they built electroluminescent soft artificial muscles for flying, insect-scale robots. The tiny artificial muscles that control the robots’ wings emit colored light during flight.

This electroluminescence could enable the robots to communicate with each other. If sent on a search-and-rescue mission into a collapsed building, for instance, a robot that finds survivors could use lights to signal others and call for help.

The ability to emit light also brings these microscale robots, which weigh barely more than a paper clip, one step closer to flying on their own outside the lab. These robots are so lightweight that they can’t carry sensors, so researchers must track them using bulky infrared cameras that don’t work well outdoors. Now, they’ve shown that they can track the robots precisely using the light they emit and just three smartphone cameras.

“If you think of large-scale robots, they can communicate using a lot of different tools — Bluetooth, wireless, all those sorts of things. But for a tiny, power-constrained robot, we are forced to think about new modes of communication. This is a major step toward flying these robots in outdoor environments where we don’t have a well-tuned, state-of-the-art motion tracking system,” says Kevin Chen, who is the D. Reid Weedon, Jr. Assistant Professor in the Department of Electrical Engineering and Computer Science (EECS), the head of the Soft and Micro Robotics Laboratory in the Research Laboratory of Electronics (RLE), and the senior author of the paper.

He and his collaborators accomplished this by embedding minuscule electroluminescent particles into the artificial muscles. The process adds just 2.5 percent more weight without impacting the flight performance of the robot.

Joining Chen on the paper are EECS graduate students Suhan Kim, the lead author, and Yi-Hsuan Hsiao; Yu Fan Chen SM ’14, PhD ’17; and Jie Mao, an associate professor at Ningxia University. The research was published this month in IEEE Robotics and Automation Letters.

A light-up actuator

These researchers previously demonstrated a new fabrication technique to build soft actuators, or artificial muscles, that flap the wings of the robot. These durable actuators are made by alternating ultrathin layers of elastomer and carbon nanotube electrode in a stack and then rolling it into a squishy cylinder. When a voltage is applied to that cylinder, the electrodes squeeze the elastomer, and the mechanical strain flaps the wing.

To fabricate a glowing actuator, the team incorporated electroluminescent zinc sulphate particles into the elastomer but had to overcome several challenges along the way.

First, the researchers had to create an electrode that would not block light. They built it using highly transparent carbon nanotubes, which are only a few nanometers thick and enable light to pass through.

However, the zinc particles only light up in the presence of a very strong and high-frequency electric field. This electric field excites the electrons in the zinc particles, which then emit subatomic particles of light known as photons. The researchers use high voltage to create a strong electric field in the soft actuator, and then drive the robot at a high frequency, which enables the particles to light up brightly.

“Traditionally, electroluminescent materials are very energetically costly, but in a sense, we get that electroluminescence for free because we just use the electric field at the frequency we need for flying. We don’t need new actuation, new wires, or anything. It only takes about 3 percent more energy to shine out light,” Kevin Chen says.

As they prototyped the actuator, they found that adding zinc particles reduced its quality, causing it to break down more easily. To get around this, Kim mixed zinc particles into the top elastomer layer only. He made that layer a few micrometers thicker to compensate for any reduction in output power.

While this made the actuator 2.5 percent heavier, it emitted light without impacting flight performance.

“We put a lot of care into maintaining the quality of the elastomer layers between the electrodes. Adding these particles was almost like adding dust to our elastomer layer. It took many different approaches and a lot of testing, but we came up with a way to ensure the quality of the actuator,” Kim says.

Adjusting the chemical combination of the zinc particles changes the light color. The researchers made green, orange, and blue particles for the actuators they built; each actuator shines one solid color.

They also tweaked the fabrication process so the actuators could emit multicolored and patterned light. The researchers placed a tiny mask over the top layer, added zinc particles, then cured the actuator. They repeated this process three times with different masks and colored particles to create a light pattern that spelled M-I-T.

These artificial muscles, which control the wings of featherweight flying robots, light up while the robot is in flight, which provides a low-cost way to track the robots and also could enable them to communicate. Credits: Courtesy of the researchers

Following the fireflies

Once they had fine-tuned the fabrication process, they tested the mechanical properties of the actuators and used a luminescence meter to measure the intensity of the light.

From there, they ran flight tests using a specially designed motion-tracking system. Each electroluminescent actuator served as an active marker that could be tracked using iPhone cameras. The cameras detect each light color, and a computer program they developed tracks the position and attitude of the robots to within 2 millimeters of state-of-the-art infrared motion capture systems.
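
The underlying geometry is ordinary multi-view triangulation. As a hedged illustration, the sketch below recovers a single glowing marker’s 3D position from several calibrated camera views using the standard linear (DLT) method; the camera matrices and pixel observations are invented for the example, and the paper’s own pipeline may differ.

```python
import numpy as np

def triangulate(projection_matrices, pixel_points):
    """Linear (DLT) triangulation of one 3D point seen by several calibrated cameras.

    projection_matrices: list of 3x4 camera matrices P_i
    pixel_points:        list of (u, v) observations of the same glowing actuator
    """
    rows = []
    for P, (u, v) in zip(projection_matrices, pixel_points):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]                                   # null-space solution in homogeneous coords
    return X[:3] / X[3]

# Toy setup: three hypothetical cameras observing a point at (0.1, 0.2, 2.0).
def simple_camera(tx):
    return np.hstack([np.eye(3), np.array([[tx], [0.0], [0.0]])])

point = np.array([0.1, 0.2, 2.0, 1.0])
cams = [simple_camera(t) for t in (0.0, -0.3, 0.3)]
pixels = [(P @ point)[:2] / (P @ point)[2] for P in cams]
print(triangulate(cams, pixels))                 # approximately [0.1, 0.2, 2.0]
```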

“We are very proud of how good the tracking result is, compared to the state-of-the-art. We were using cheap hardware, compared to the tens of thousands of dollars these large motion-tracking systems cost, and the tracking results were very close,” Kevin Chen says.

In the future, they plan to enhance that motion tracking system so it can track robots in real-time. The team is working to incorporate control signals so the robots could turn their light on and off during flight and communicate more like real fireflies. They are also studying how electroluminescence could even improve some properties of these soft artificial muscles, Kevin Chen says.

“This work is really interesting because it minimizes the overhead (weight and power) for light generation without compromising flight performance,” says Kaushik Jayaram, an assistant professor in the Department of Mechanical Engineering at the University of Colorado at Boulder, who was not involved with this research. “The wingbeat synchronized flash generation demonstrated in this work will make it easier for motion tracking and flight control of multiple microrobots in low-light environments both indoors and outdoors.”

“While the light production, the reminiscence of biological fireflies, and the potential use of communication presented in this work are extremely interesting, I believe the true momentum is that this latest development could turn out to be a milestone toward the demonstration of these robots outside controlled laboratory conditions,” adds Pakpong Chirarattananon, an associate professor in the Department of Biomedical Engineering at the City University of Hong Kong, who also was not involved with this work. “The illuminated actuators potentially act as active markers for external cameras to provide real-time feedback for flight stabilization to replace the current motion capture system. The electroluminescence would allow less sophisticated equipment to be used and the robots to be tracked from distance, perhaps via another larger mobile robot, for real-world deployment. That would be a remarkable breakthrough. I would be thrilled to see what the authors accomplish next.”

This work was supported by the Research Laboratory of Electronics at MIT.

At the forefront of building with biology

Ritu Raman, the d’Arbeloff Career Development Assistant Professor of Mechanical Engineering, focuses on building with biology, using living cells. Photo: David Sella

By Daniel de Wolff | MIT Industrial Liaison Program

It would seem that engineering is in Ritu Raman’s blood. Her mother is a chemical engineer, her father is a mechanical engineer, and her grandfather is a civil engineer. A common thread among her childhood experiences was witnessing firsthand the beneficial impact that engineering careers could have on communities. One of her earliest memories is watching her parents build communication towers to connect the rural villages of Kenya to the global infrastructure. She recalls the excitement she felt watching the emergence of a physical manifestation of innovation that would have a lasting positive impact on the community.  

Raman is, as she puts it, “a mechanical engineer through and through.” She earned her BS, MS, and PhD in mechanical engineering. Her postdoc at MIT was funded by a L’Oréal USA for Women in Science Fellowship and a Ford Foundation Fellowship from the National Academies of Sciences, Engineering, and Medicine.

Today, Ritu Raman leads the Raman Lab and is an assistant professor in the Department of Mechanical Engineering. But Raman is not tied to traditional notions of what mechanical engineers should be building or the materials typically associated with the field. “As a mechanical engineer, I’ve pushed back against the idea that people in my field only build cars and rockets from metals, polymers, and ceramics. I’m interested in building with biology, with living cells,” she says.

Our machines, from our phones to our cars, are designed with very specific purposes. And they aren’t cheap. But a dropped phone or a crashed car could mean the end of it, or at the very least an expensive repair bill. For the most part, that isn’t the case with our bodies. Biological materials have an unparalleled ability to sense, process, and respond to their environment in real-time. “As humans, if we cut our skin or if we fall, we’re able to heal,” says Raman. “So, I started wondering, ‘Why aren’t engineers building with the materials that have these dynamically responsive capabilities?’”

These days, Raman is focused on building actuators (devices that provide movement) powered by neurons and skeletal muscle that can teach us more about how we move and how we navigate the world. Specifically, she’s creating millimeter-scale models of skeletal muscle controlled by the motor neurons that help us plan and execute movement as well as the sensory neurons that tell us how to respond to dynamic changes in our environment.

Eventually, her actuators may guide the way to building better robots. Today, even our most advanced robots are a far cry from being able to reproduce human motion — our ability to run, leap, pivot on a dime, and change direction. But bioengineered muscle made in Raman’s lab has the potential to create robots that are more dynamically responsive to their environments.

Researchers release open-source photorealistic simulator for autonomous driving

VISTA 2.0 is an open-source simulation engine that can make realistic environments for training and testing self-driving cars. Credits: Image courtesy of MIT CSAIL.

By Rachel Gordon | MIT CSAIL

Hyper-realistic virtual worlds have been heralded as the best driving schools for autonomous vehicles (AVs), since they’ve proven fruitful test beds for safely trying out dangerous driving scenarios. Tesla, Waymo, and other self-driving companies all rely heavily on data to power expensive, proprietary photorealistic simulators, since nuanced, I-almost-crashed data usually isn’t easy or desirable to gather and recreate.

To that end, scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) created “VISTA 2.0,” a data-driven simulation engine where vehicles can learn to drive in the real world and recover from near-crash scenarios. What’s more, all of the code is being open-sourced to the public. 

“Today, only companies have software like the type of simulation environments and capabilities of VISTA 2.0, and this software is proprietary. With this release, the research community will have access to a powerful new tool for accelerating the research and development of adaptive robust control for autonomous driving,” says MIT Professor and CSAIL Director Daniela Rus, senior author on a paper about the research. 

VISTA is a data-driven, photorealistic simulator for autonomous driving. It can simulate not just live video but LiDAR data and event cameras, and also incorporate other simulated vehicles to model complex driving situations. VISTA is open source.

VISTA 2.0 builds off of the team’s previous model, VISTA, and it’s fundamentally different from existing AV simulators since it’s data-driven — meaning it was built and photorealistically rendered from real-world data — thereby enabling direct transfer to reality. While the initial iteration supported only single car lane-following with one camera sensor, achieving high-fidelity data-driven simulation required rethinking the foundations of how different sensors and behavioral interactions can be synthesized. 

Enter VISTA 2.0: a data-driven system that can simulate complex sensor types and massively interactive scenarios and intersections at scale. With much less data than previous models, the team was able to train autonomous vehicles that could be substantially more robust than those trained on large amounts of real-world data. 

“This is a massive jump in capabilities of data-driven simulation for autonomous vehicles, as well as the increase of scale and ability to handle greater driving complexity,” says Alexander Amini, CSAIL PhD student and co-lead author on two new papers, together with fellow PhD student Tsun-Hsuan Wang. “VISTA 2.0 demonstrates the ability to simulate sensor data far beyond 2D RGB cameras, but also extremely high dimensional 3D lidars with millions of points, irregularly timed event-based cameras, and even interactive and dynamic scenarios with other vehicles as well.” 

The team was able to scale the complexity of the interactive driving tasks for things like overtaking, following, and negotiating, including multiagent scenarios in highly photorealistic environments. 

Training AI models for autonomous vehicles requires hard-to-secure examples of edge cases and strange, dangerous scenarios, because most of the data we collect (thankfully) is just run-of-the-mill, day-to-day driving. Logically, we can’t crash into other cars just to teach a neural network how not to crash into other cars.

Recently, there’s been a shift away from more classic, human-designed simulation environments to those built up from real-world data. The latter have immense photorealism, but the former can easily model virtual cameras and lidars. With this paradigm shift, a key question has emerged: Can the richness and complexity of all of the sensors that autonomous vehicles need, such as lidar and event-based cameras that are more sparse, accurately be synthesized? 

Lidar sensor data is much harder to interpret in a data-driven world — you’re effectively trying to generate brand-new 3D point clouds with millions of points, only from sparse views of the world. To synthesize 3D lidar point clouds, the team used the data that the car collected, projected it into a 3D space coming from the lidar data, and then let a new virtual vehicle drive around locally from where that original vehicle was. Finally, they projected all of that sensory information back into the frame of view of this new virtual vehicle, with the help of neural networks. 
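
Setting aside the neural-network densification, the rigid-transform-and-reproject step can be sketched simply: move the recorded point cloud into the frame of a hypothetical new vehicle pose, then bin the points into a range image. The pose, bin counts, and field of view below are arbitrary choices made for illustration.

```python
import numpy as np

def view_from_new_pose(points, rotation, translation):
    """Express a lidar point cloud in the frame of a new virtual vehicle pose."""
    return (points - translation) @ rotation          # inverse rigid transform, row-vector convention

def to_range_image(points, h_bins=360, v_bins=32):
    """Project 3D points into a coarse range image (azimuth x elevation grid of distances)."""
    x, y, z = points.T
    r = np.linalg.norm(points, axis=1)
    az = np.digitize(np.arctan2(y, x), np.linspace(-np.pi, np.pi, h_bins)) - 1
    el = np.digitize(np.arcsin(z / (r + 1e-9)), np.linspace(-0.4, 0.2, v_bins)) - 1
    image = np.full((v_bins, h_bins), np.inf)
    np.minimum.at(image, (np.clip(el, 0, v_bins - 1), np.clip(az, 0, h_bins - 1)), r)
    return image

# Toy usage: random points, virtual vehicle shifted 2 m forward with a small yaw.
cloud = np.random.uniform(-20, 20, size=(10000, 3))
yaw = np.deg2rad(5)
R = np.array([[np.cos(yaw), -np.sin(yaw), 0], [np.sin(yaw), np.cos(yaw), 0], [0, 0, 1]])
shifted = view_from_new_pose(cloud, R, translation=np.array([2.0, 0.0, 0.0]))
print(to_range_image(shifted).shape)                  # (32, 360)
```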

Together with the simulation of event-based cameras, which operate at speeds greater than thousands of events per second, the simulator was capable of not only simulating this multimodal information, but also doing so all in real time — making it possible to train neural nets offline, but also test online on the car in augmented reality setups for safe evaluations. “The question of if multisensor simulation at this scale of complexity and photorealism was possible in the realm of data-driven simulation was very much an open question,” says Amini. 

With that, the driving school becomes a party. In the simulation, you can move around, have different types of controllers, simulate different types of events, create interactive scenarios, and just drop in brand new vehicles that weren’t even in the original data. They tested for lane following, lane turning, car following, and more dicey scenarios like static and dynamic overtaking (seeing obstacles and moving around so you don’t collide). With multi-agent support, both real and simulated agents interact, and new agents can be dropped into the scene and controlled any which way.

Taking their full-scale car out into the “wild” — a.k.a. Devens, Massachusetts — the team saw immediate transferability of results, with both failures and successes. They were also able to demonstrate the bodacious, magic word of self-driving car models: “robust.” They showed that AVs, trained entirely in VISTA 2.0, were so robust in the real world that they could handle that elusive tail of challenging failures.

Now, one guardrail humans rely on that can’t yet be simulated is human emotion. It’s the friendly wave, nod, or blinker switch of acknowledgement, which are the type of nuances the team wants to implement in future work. 

“The central algorithm of this research is how we can take a dataset and build a completely synthetic world for learning and autonomy,” says Amini. “It’s a platform that I believe one day could extend in many different axes across robotics. Not just autonomous driving, but many areas that rely on vision and complex behaviors. We’re excited to release VISTA 2.0 to help enable the community to collect their own datasets and convert them into virtual worlds where they can directly simulate their own virtual autonomous vehicles, drive around these virtual terrains, train autonomous vehicles in these worlds, and then can directly transfer them to full-sized, real self-driving cars.” 

Amini and Wang wrote the paper alongside Zhijian Liu, MIT CSAIL PhD student; Igor Gilitschenski, assistant professor in computer science at the University of Toronto; Wilko Schwarting, AI research scientist and MIT CSAIL PhD ’20; Song Han, associate professor at MIT’s Department of Electrical Engineering and Computer Science; Sertac Karaman, associate professor of aeronautics and astronautics at MIT; and Daniela Rus, MIT professor and CSAIL director. The researchers presented the work at the IEEE International Conference on Robotics and Automation (ICRA) in Philadelphia. 

This work was supported by the National Science Foundation and Toyota Research Institute. The team acknowledges the support of NVIDIA with the donation of the Drive AGX Pegasus.

An easier way to teach robots new skills

MIT researchers have developed a system that enables a robot to learn a new pick-and-place task based on only a handful of human examples. This could allow a human to reprogram a robot to grasp never-before-seen objects, presented in random poses, in about 15 minutes. Courtesy of the researchers

By Adam Zewe | MIT News Office

With e-commerce orders pouring in, a warehouse robot picks mugs off a shelf and places them into boxes for shipping. Everything is humming along, until the warehouse processes a change and the robot must now grasp taller, narrower mugs that are stored upside down.

Reprogramming that robot involves hand-labeling thousands of images that show it how to grasp these new mugs, then training the system all over again.

But a new technique developed by MIT researchers would require only a handful of human demonstrations to reprogram the robot. This machine-learning method enables a robot to pick up and place never-before-seen objects that are in random poses it has never encountered. Within 10 to 15 minutes, the robot would be ready to perform a new pick-and-place task.

The technique uses a neural network specifically designed to reconstruct the shapes of 3D objects. With just a few demonstrations, the system uses what the neural network has learned about 3D geometry to grasp new objects that are similar to those in the demos.

In simulations and using a real robotic arm, the researchers show that their system can effectively manipulate never-before-seen mugs, bowls, and bottles, arranged in random poses, using only 10 demonstrations to teach the robot.

“Our major contribution is the general ability to much more efficiently provide new skills to robots that need to operate in more unstructured environments where there could be a lot of variability. The concept of generalization by construction is a fascinating capability because this problem is typically so much harder,” says Anthony Simeonov, a graduate student in electrical engineering and computer science (EECS) and co-lead author of the paper.

Simeonov wrote the paper with co-lead author Yilun Du, an EECS graduate student; Andrea Tagliasacchi, a staff research scientist at Google Brain; Joshua B. Tenenbaum, the Paul E. Newton Career Development Professor of Cognitive Science and Computation in the Department of Brain and Cognitive Sciences and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); Alberto Rodriguez, the Class of 1957 Associate Professor in the Department of Mechanical Engineering; and senior authors Pulkit Agrawal, a professor in CSAIL, and Vincent Sitzmann, an incoming assistant professor in EECS. The research will be presented at the International Conference on Robotics and Automation.

Grasping geometry

A robot may be trained to pick up a specific item, but if that object is lying on its side (perhaps it fell over), the robot sees this as a completely new scenario. This is one reason it is so hard for machine-learning systems to generalize to new object orientations.

To overcome this challenge, the researchers created a new type of neural network model, a Neural Descriptor Field (NDF), that learns the 3D geometry of a class of items. The model computes the geometric representation for a specific item using a 3D point cloud, which is a set of data points or coordinates in three dimensions. The data points can be obtained from a depth camera that provides information on the distance between the object and a viewpoint. While the network was trained in simulation on a large dataset of synthetic 3D shapes, it can be directly applied to objects in the real world.
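
The point cloud itself comes from a routine unprojection of the depth image. The sketch below shows that step with made-up camera intrinsics; it is the standard pinhole-camera computation rather than anything specific to NDFs.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Unproject a depth image (meters) into an N x 3 point cloud in the camera frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]          # drop pixels with no depth reading

# Toy usage with a flat synthetic depth map and made-up intrinsics.
depth = np.full((480, 640), 0.8)
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)                            # (307200, 3)
```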

The team designed the NDF with a property known as equivariance. With this property, if the model is shown an image of an upright mug, and then shown an image of the same mug on its side, it understands that the second mug is the same object, just rotated.

“This equivariance is what allows us to much more effectively handle cases where the object you observe is in some arbitrary orientation,” Simeonov says.

As the NDF learns to reconstruct shapes of similar objects, it also learns to associate related parts of those objects. For instance, it learns that the handles of mugs are similar, even if some mugs are taller or wider than others, or have smaller or longer handles.

“If you wanted to do this with another approach, you’d have to hand-label all the parts. Instead, our approach automatically discovers these parts from the shape reconstruction,” Du says.

The researchers use this trained NDF model to teach a robot a new skill with only a few physical examples. They move the hand of the robot onto the part of an object they want it to grip, like the rim of a bowl or the handle of a mug, and record the locations of the fingertips.

Because the NDF has learned so much about 3D geometry and how to reconstruct shapes, it can infer the structure of a new shape, which enables the system to transfer the demonstrations to new objects in arbitrary poses, Du explains.

Picking a winner

They tested their model in simulations and on a real robotic arm using mugs, bowls, and bottles as objects. Their method had a success rate of 85 percent on pick-and-place tasks with new objects in new orientations, while the best baseline was only able to achieve a success rate of 45 percent. Success means grasping a new object and placing it on a target location, like hanging mugs on a rack.

Many baselines use 2D image information rather than 3D geometry, which makes it more difficult for these methods to integrate equivariance. This is one reason the NDF technique performed so much better.

While the researchers were happy with its performance, their method only works for the particular object category on which it is trained. A robot taught to pick up mugs won’t be able to pick up boxes or headphones, since these objects have geometric features that are too different from those the network was trained on.

“In the future, scaling it up to many categories or completely letting go of the notion of category altogether would be ideal,” Simeonov says.

They also plan to adapt the system for nonrigid objects and, in the longer term, enable the system to perform pick-and-place tasks when the target area changes.

This work is supported, in part, by the Defense Advanced Research Projects Agency, the Singapore Defense Science and Technology Agency, and the National Science Foundation.

A flexible way to grab items with feeling

The GelSight Fin Ray gripper holds a glass Mason jar with its tactile sensing. Photo courtesy of MIT CSAIL.

By Rachel Gordon | MIT CSAIL

The notion of a large metallic robot that speaks in monotone and moves in lumbering, deliberate steps is somewhat hard to shake. But practitioners in the field of soft robotics have an entirely different image in mind — autonomous devices composed of compliant parts that are gentle to the touch, more closely resembling human fingers than R2-D2 or Robby the Robot.

That model is now being pursued by Professor Edward Adelson and his Perceptual Science Group at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). In a recent project, Adelson and Sandra Liu — a mechanical engineering PhD student at CSAIL — have developed a robotic gripper using novel “GelSight Fin Ray” fingers that, like the human hand, is supple enough to manipulate objects. What sets this work apart from other efforts in the field is that Liu and Adelson have endowed their gripper with touch sensors that can meet or exceed the sensitivity of human skin.

Their work was presented last week at the 2022 IEEE 5th International Conference on Soft Robotics.

The fin ray has become a popular item in soft robotics owing to a discovery made in 1997 by the German biologist Leif Kniese. He noticed that when he pushed against a fish’s tail with his finger, the ray would bend toward the applied force, almost embracing his finger, rather than tilting away. The design has become popular, but it lacks tactile sensitivity. “It’s versatile because it can passively adapt to different shapes and therefore grasp a variety of objects,” Liu explains. “But in order to go beyond what others in the field had already done, we set out to incorporate a rich tactile sensor into our gripper.”

The gripper consists of two flexible fin ray fingers that conform to the shape of the object they come in contact with. The fingers themselves are assembled from flexible plastic materials made on a 3D printer, which is pretty standard in the field. However, the fingers typically used in soft robotic grippers have supportive cross-struts running through the length of their interiors, whereas Liu and Adelson hollowed out the interior region so they could create room for their camera and other sensory components.

The camera is mounted to a semirigid backing on one end of the hollowed-out cavity, which is, itself, illuminated by LEDs. The camera faces a layer of “sensory” pads composed of silicone gel (known as “GelSight”) that is glued to a thin layer of acrylic material. The acrylic sheet, in turn, is attached to the plastic finger piece at the opposite end of the inner cavity. Upon touching an object, the finger will seamlessly fold around it, melding to the object’s contours. By determining exactly how the silicone and acrylic sheets are deformed during this interaction, the camera — along with accompanying computational algorithms — can assess the general shape of the object, its surface roughness, its orientation in space, and the force being applied by (and imparted to) each finger.

Liu and Adelson tested out their gripper in an experiment during which just one of the two fingers was “sensorized.” Their device successfully handled such items as a mini-screwdriver, a plastic strawberry, an acrylic paint tube, a Ball Mason jar, and a wine glass. While the gripper was holding the fake strawberry, for instance, the internal sensor was able to detect the “seeds” on its surface. The fingers grabbed the paint tube without squeezing so hard as to breach the container and spill its contents.

The GelSight sensor could even make out the lettering on the Mason jar, and did so in a rather clever way. The overall shape of the jar was ascertained first by seeing how the acrylic sheet was bent when wrapped around it. That pattern was then subtracted, by a computer algorithm, from the deformation of the silicone pad, and what was left was the more subtle deformation due just to the letters.
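
One simple way to picture that shape-then-texture separation is a high-pass filter on the tactile height map: blur away the coarse curvature and keep what remains. The sketch below does exactly that with a Gaussian blur; the authors’ actual algorithm is more involved, and the smoothing scale here is an arbitrary assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fine_detail(height_map, smoothing_sigma=15):
    """Split a tactile height map into coarse shape and fine detail (e.g., embossed lettering).

    The low-pass (blurred) component approximates the jar's overall curvature;
    what remains after subtraction is the small-scale texture.
    """
    coarse = gaussian_filter(height_map, sigma=smoothing_sigma)
    return height_map - coarse

# Toy usage: a curved "jar" surface with a small raised bump standing in for a letter.
yy, xx = np.mgrid[0:200, 0:200]
jar = 1e-3 * ((xx - 100) ** 2)                      # smooth curvature
letter = np.zeros_like(jar)
letter[90:110, 95:105] = 0.5                        # small embossed feature
detail = fine_detail(jar + letter)
print(detail[100, 100] > detail[10, 10])            # the bump stands out in the detail map
```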

Glass objects are challenging for vision-based robots because of the refraction of the light. Tactile sensors are immune to such optical ambiguity. When the gripper picked up the wine glass, it could feel the orientation of the stem and could make sure the glass was pointing straight up before it was slowly lowered. When the base touched the tabletop, the gel pad sensed the contact. Proper placement occurred in seven out of 10 trials and, thankfully, no glass was harmed during the filming of this experiment.

“Sensing with soft robots has been a big challenge, because it is difficult to set up sensors — which are traditionally rigid — on soft bodies,” says Wenzhen Yuan, an assistant professor in the Robotics Institute at Carnegie Mellon University who was not involved with the research. “This paper provides a neat solution to that problem. The authors used a very smart design to make their vision-based sensor work for the compliant gripper, in this way generating very good results when robots grasp objects or interact with the external environment. The technology has lots of potential to be widely used for robotic grippers in real-world environments.”

Liu and Adelson can foresee many possible applications for the GelSight Fin Ray, but they are first contemplating some improvements. By hollowing out the finger to clear space for their sensory system, they introduced a structural instability, a tendency to twist, that they believe can be counteracted through better design. They want to make GelSight sensors that are compatible with soft robots devised by other research teams. And they also plan to develop a three-fingered gripper that could be useful in such tasks as picking up pieces of fruit and evaluating their ripeness.

Tactile sensing, in their approach, is based on inexpensive components: a camera, some gel, and some LEDs. Liu hopes that with a technology like GelSight, “it may be possible to come up with sensors that are both practical and affordable.” That, at least, is one goal that she and others in the lab are striving toward.

The Toyota Research Institute and the U.S. Office of Naval Research provided funds to support this work.
