System detects errors when medication is self-administered

The new technology pairs wireless sensing with artificial intelligence to determine when a patient is using an insulin pen or inhaler, and it flags potential errors in the patient’s administration method. | Image: courtesy of the researchers

From swallowing pills to injecting insulin, patients frequently administer their own medication. But they don’t always get it right. Improper adherence to doctors’ orders is commonplace, accounting for thousands of deaths and billions of dollars in medical costs annually. MIT researchers have developed a system to reduce those numbers for some types of medications.

The new technology pairs wireless sensing with artificial intelligence to determine when a patient is using an insulin pen or inhaler, and flags potential errors in the patient’s administration method. “Some past work reports that up to 70% of patients do not take their insulin as prescribed, and many patients do not use inhalers properly,” says Dina Katabi, the Andrew and Erna Viterbi Professor at MIT, whose research group has developed the new solution. The researchers say the system, which can be installed in a home, could alert patients and caregivers to medication errors and potentially reduce unnecessary hospital visits.

The research appears today in the journal Nature Medicine. The study’s lead authors are Mingmin Zhao, a PhD student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), and Kreshnik Hoti, a former visiting scientist at MIT and current faculty member at the University of Prishtina in Kosovo. Other co-authors include Hao Wang, a former CSAIL postdoc and current faculty member at Rutgers University, and Aniruddh Raghu, a CSAIL PhD student.

Some common drugs entail intricate delivery mechanisms. “For example, insulin pens require priming to make sure there are no air bubbles inside. And after injection, you have to hold for 10 seconds,” says Zhao. “All those little steps are necessary to properly deliver the drug to its active site.” Each step also presents opportunity for errors, especially when there’s no pharmacist present to offer corrective tips. Patients might not even realize when they make a mistake — so Zhao’s team designed an automated system that can.

Their system can be broken down into three broad steps. First, a sensor tracks a patient’s movements within a 10-meter radius, using radio waves that reflect off their body. Next, artificial intelligence scours the reflected signals for signs of a patient self-administering an inhaler or insulin pen. Finally, the system alerts the patient or their health care provider when it detects an error in the patient’s self-administration.

Wireless sensing technology could help improve patients’ technique with inhalers and insulin pens. | Image: Christine Daniloff, MIT

The researchers adapted their sensing method from a wireless technology they’d previously used to monitor people’s sleeping positions. It starts with a wall-mounted device that emits very low-power radio waves. When someone moves, they modulate the signal and reflect it back to the device’s sensor. Each unique movement yields a corresponding pattern of modulated radio waves that the device can decode. “One nice thing about this system is that it doesn’t require the patient to wear any sensors,” says Zhao. “It can even work through occlusions, similar to how you can access your Wi-Fi when you’re in a different room from your router.”

The new sensor sits in the background at home, like a Wi-Fi router, and uses artificial intelligence to interpret the modulated radio waves. The team developed a neural network to key in on patterns indicating the use of an inhaler or insulin pen. They trained the network to learn those patterns by having people perform example movements, some relevant (e.g., using an inhaler) and some not (e.g., eating). Through repetition and reinforcement, the network successfully detected 96 percent of insulin pen administrations and 99 percent of inhaler uses.
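The paper’s network architecture isn’t reproduced here, but a minimal sketch conveys the shape of the approach: a small classifier that maps a time window of reflected-signal features to an event label. Everything below — the feature dimension, window length, and class set — is an illustrative assumption, not the authors’ model.

```python
# Illustrative sketch only -- not the architecture from the Nature Medicine
# paper. Assumes each example is a window of reflected-RF features with shape
# (time_steps, feature_dim); sizes and the class set are hypothetical.
import torch
import torch.nn as nn

N_CLASSES = 3  # e.g., inhaler use, insulin-pen use, irrelevant motion (assumed)

class MotionClassifier(nn.Module):
    """Toy 1D-CNN mapping a window of RF features to an event label."""
    def __init__(self, feature_dim=64, n_classes=N_CLASSES):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(feature_dim, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # average over the time axis
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):              # x: (batch, time, feature_dim)
        x = x.transpose(1, 2)          # Conv1d wants (batch, channels, time)
        return self.head(self.conv(x).squeeze(-1))

model = MotionClassifier()
window = torch.randn(1, 256, 64)       # one synthetic 256-step window
print(model(window).softmax(dim=-1))   # per-class probabilities
```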

Once it mastered the art of detection, the network also proved useful for correction. Every proper medicine administration follows a similar sequence — picking up the insulin pen, priming it, injecting, etc. So, the system can flag anomalies in any particular step. For example, the network can recognize if a patient holds down their insulin pen for five seconds instead of the prescribed 10 seconds. The system can then relay that information to the patient or directly to their doctor, so they can fix their technique.
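Downstream of detection, the error-flagging step can be pictured as a simple rule check over the durations of the detected sub-steps. The step names and thresholds below are invented for illustration — only the 10-second hold is described above — and the real system learns the sequence from data.

```python
# Hypothetical rule check layered on top of the detector's per-step timings.
PROTOCOL = {
    "prime": (1.0, None),                  # (min_seconds, max_seconds)
    "inject": (0.5, None),
    "hold_after_injection": (10.0, None),  # hold for at least 10 seconds
}

def check_administration(step_durations):
    """Return warnings for protocol steps that are missing or mistimed."""
    warnings = []
    for step, (lo, hi) in PROTOCOL.items():
        t = step_durations.get(step)
        if t is None:
            warnings.append(f"step '{step}' was not detected")
        elif t < lo:
            warnings.append(f"'{step}' lasted {t:.1f} s; at least {lo:.0f} s required")
        elif hi is not None and t > hi:
            warnings.append(f"'{step}' lasted {t:.1f} s; at most {hi:.0f} s expected")
    return warnings

# A five-second hold triggers exactly the kind of flag described above.
print(check_administration({"prime": 2.0, "inject": 0.8,
                            "hold_after_injection": 5.0}))
```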

“By breaking it down into these steps, we can not only see how frequently the patient is using their device, but also assess their administration technique to see how well they’re doing,” says Zhao.

The researchers say a key feature of their radio wave-based system is its noninvasiveness. “An alternative way to solve this problem is by installing cameras,” says Zhao. “But using a wireless signal is much less intrusive. It doesn’t show people’s appearance.”

He adds that their framework could be adapted to medications beyond inhalers and insulin pens — all it would take is retraining the neural network to recognize the appropriate sequence of movements. Zhao says that “with this type of sensing technology at home, we could detect issues early on, so the person can see a doctor before the problem is exacerbated.”

Fostering ethical thinking in computing

A case studies series from the Social and Ethical Responsibilities of Computing program at MIT delves into a range of topics, from how social media services and surveillance tools are built and the racial disparities that can arise from deploying facial recognition technology in unregulated, real-world settings to the biases of risk prediction algorithms in the criminal justice system and the politicization of data collection.

By Terri Park | MIT Schwarzman College of Computing

Traditional computer scientists and engineers are trained to develop solutions for specific needs, but aren’t always trained to consider their broader implications. Each new technology generation, and particularly the rise of artificial intelligence, leads to new kinds of systems, new ways of creating tools, and new forms of data, with which norms, rules, and laws frequently have yet to catch up. The kinds of impact that such innovations have in the world have often not been apparent until many years later.

As part of the efforts in Social and Ethical Responsibilities of Computing (SERC) within the MIT Stephen A. Schwarzman College of Computing, a new case studies series examines social, ethical, and policy challenges of present-day efforts in computing with the aim of facilitating the development of responsible “habits of mind and action” for those who create and deploy computing technologies.

“Advances in computing have undeniably changed much of how we live and work. Understanding and incorporating broader social context is becoming ever more critical,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing. “This case study series is designed to be a basis for discussions in the classroom and beyond, regarding social, ethical, economic, and other implications so that students and researchers can pursue the development of technology across domains in a holistic manner that addresses these important issues.”

A modular system

By design, the case studies are brief and modular to allow users to mix and match the content to fit a variety of pedagogical needs. Series editors David Kaiser and Julie Shah, who are the associate deans for SERC, structured the cases primarily to be appropriate for undergraduate instruction across a range of classes and fields of study.

“Our goal was to provide a seamless way for instructors to integrate cases into an existing course or cluster several cases together to support a broader module within a course. They might also use the cases as a starting point to design new courses that focus squarely on themes of social and ethical responsibilities of computing,” says Kaiser, the Germeshausen Professor of the History of Science and professor of physics.

Shah, an associate professor of aeronautics and astronautics and a roboticist who designs systems in which humans and machines operate side by side, expects that the cases will also be of interest to those outside of academia, including computing professionals, policy specialists, and general readers. In curating the series, Shah says that “we interpret ‘social and ethical responsibilities of computing’ broadly to focus on perspectives of people who are affected by various technologies, as well as focus on perspectives of designers and engineers.”

The cases are not limited to a particular format and can take shape in various forms — from a magazine-like feature article or Socratic dialogues to choose-your-own-adventure stories or role-playing games grounded in empirical research. Each case study is brief, but includes accompanying notes and references to facilitate more in-depth exploration of a given topic. Multimedia projects will also be considered. “The main goal is to present important material — based on original research — in engaging ways to broad audiences of non-specialists,” says Kaiser.

The SERC case studies are specially commissioned and written by scholars who conduct research centrally on the subject of the piece. Kaiser and Shah approached researchers from within MIT as well as from other academic institutions to bring in a mix of diverse voices on a spectrum of topics. Some cases focus on a particular technology or on trends across platforms, while others assess social, historical, philosophical, legal, and cultural facets that are relevant for thinking critically about current efforts in computing and data sciences.

The cases published in the inaugural issue place readers in various settings that challenge them to consider the social and ethical implications of computing technologies, such as how social media services and surveillance tools are built; the racial disparities that can arise from deploying facial recognition technology in unregulated, real-world settings; the biases of risk prediction algorithms in the criminal justice system; and the politicization of data collection.

“Most of us agree that we want computing to work for social good, but which good? Whose good? Whose needs and values and worldviews are prioritized and whose are overlooked?” says Catherine D’Ignazio, an assistant professor of urban science and planning and director of the Data + Feminism Lab at MIT.

D’Ignazio’s case for the series, co-authored with Lauren Klein, an associate professor in the English and Quantitative Theory and Methods departments at Emory University, introduces readers to the idea that while data are useful, they are not always neutral. “These case studies help us understand the unequal histories that shape our technological systems as well as study their disparate outcomes and effects. They are an exciting step towards holistic, sociotechnical thinking and making.”

Rigorously reviewed

Kaiser and Shah formed an editorial board composed of 55 faculty members and senior researchers associated with 19 departments, labs, and centers at MIT, and instituted a rigorous peer-review policy modeled on those commonly adopted by specialized journals. Members of the editorial board will also help commission topics for new cases and help identify authors for a given topic.

For each submission, the series editors collect four to six peer reviews, with reviewers mostly drawn from the editorial board. For each case, half the reviewers come from fields in computing and data sciences and half from fields in the humanities, arts, and social sciences, to ensure balance of topics and presentation within a given case study and across the series.

“Over the past two decades I’ve become a bit jaded when it comes to the academic review process, and so I was particularly heartened to see such care and thought put into all of the reviews,” says Hany Farid, a professor at the University of California at Berkeley with a joint appointment in the Department of Electrical Engineering and Computer Sciences and the School of Information. “The constructive review process made our case study significantly stronger.”

Farid’s case, “The Dangers of Risk Prediction in the Criminal Justice System,” which he penned with Julia Dressel, recently a student of computer science at Dartmouth College, is one of the four commissioned pieces featured in the inaugural issue.

Cases are additionally reviewed by undergraduate volunteers, who help the series editors gauge each submission for balance, accessibility for students in multiple fields of study, and possibilities for adoption in specific courses. The students also work with them to create original homework problems and active learning projects to accompany each case study, to further facilitate adoption of the original materials across a range of existing undergraduate subjects.

“I volunteered to work with this group because I believe that it’s incredibly important for those working in computer science to include thinking about ethics not as an afterthought, but integrated into every step and decision that is made,” says Annie Snyder, a mathematical economics sophomore and a member of the MIT Schwarzman College of Computing’s Undergraduate Advisory Group. “While this is a massive issue to take on, this project is an amazing opportunity to start building an ethical culture amongst the incredibly talented students at MIT who will hopefully carry it forward into their own projects and workplace.”

New sets of case studies, produced with support from the MIT Press’ Open Publishing Services program, will be published twice a year via the Knowledge Futures Group’s PubPub platform. The SERC case studies are made available for free on an open-access basis, under Creative Commons licensing terms. Authors retain copyright, enabling them to reuse and republish their work in more specialized scholarly publications.

“It was important to us to approach this project in an inclusive way and lower the barrier for people to be able to access this content. These are complex issues that we need to deal with, and we hope that by making the cases widely available, more people will engage in social and ethical considerations as they’re studying and developing computing technologies,” says Shah.

Researchers introduce a new generation of tiny, agile drones

Insects’ remarkable acrobatic traits help them navigate the aerial world, with all of its wind gusts, obstacles, and general uncertainty. Image: courtesy of Kevin Yufeng Chen

By Daniel Ackerman

If you’ve ever swatted a mosquito away from your face, only to have it return again (and again and again), you know that insects can be remarkably acrobatic and resilient in flight. Those traits help them navigate the aerial world, with all of its wind gusts, obstacles, and general uncertainty. Such traits are also hard to build into flying robots, but MIT Assistant Professor Kevin Yufeng Chen has built a system that approaches insects’ agility.

Chen, a member of the Department of Electrical Engineering and Computer Science and the Research Laboratory of Electronics, has developed insect-sized drones with unprecedented dexterity and resilience. The aerial robots are powered by a new class of soft actuator, which allows them to withstand the physical travails of real-world flight. Chen hopes the robots could one day aid humans by pollinating crops or performing machinery inspections in cramped spaces.

Chen’s work appears this month in the journal IEEE Transactions on Robotics. His co-authors include MIT PhD student Zhijian Ren, Harvard University PhD student Siyi Xu, and City University of Hong Kong roboticist Pakpong Chirarattananon.

Typically, drones require wide open spaces because they’re neither nimble enough to navigate confined spaces nor robust enough to withstand collisions in a crowd. “If we look at most drones today, they’re usually quite big,” says Chen. “Most of their applications involve flying outdoors. The question is: Can you create insect-scale robots that can move around in very complex, cluttered spaces?”

According to Chen, “The challenge of building small aerial robots is immense.” Pint-sized drones require a fundamentally different construction from larger ones. Large drones are usually powered by motors, but motors lose efficiency as you shrink them. So, Chen says, for insect-like robots “you need to look for alternatives.”

The principal alternative until now has been employing a small, rigid actuator built from piezoelectric ceramic materials. While piezoelectric ceramics allowed the first generation of tiny robots to take flight, they’re quite fragile. And that’s a problem when you’re building a robot to mimic an insect — foraging bumblebees endure a collision about once every second.

Chen designed a more resilient tiny drone using soft actuators instead of hard, fragile ones. The soft actuators are made of thin rubber cylinders coated in carbon nanotubes. When voltage is applied to the carbon nanotubes, they produce an electrostatic force that squeezes and elongates the rubber cylinder. Repeated elongation and contraction causes the drone’s wings to beat — fast.

Chen’s actuators can flap nearly 500 times per second, giving the drone insect-like resilience. “You can hit it when it’s flying, and it can recover,” says Chen. “It can also do aggressive maneuvers like somersaults in the air.” And it weighs in at just 0.6 grams, approximately the mass of a large bumblebee. The drone looks a bit like a tiny cassette tape with wings, though Chen is working on a new prototype shaped like a dragonfly.

“Achieving flight with a centimeter-scale robot is always an impressive feat,” says Farrell Helbling, an assistant professor of electrical and computer engineering at Cornell University, who was not involved in the research. “Because of the soft actuators’ inherent compliance, the robot can safely run into obstacles without greatly inhibiting flight. This feature is well-suited for flight in cluttered, dynamic environments and could be very useful for any number of real-world applications.”

Helbling adds that a key step toward those applications will be untethering the robots from a wired power source, which is currently required by the actuators’ high operating voltage. “I’m excited to see how the authors will reduce operating voltage so that they may one day be able to achieve untethered flight in real-world environments.”

Building insect-like robots can provide a window into the biology and physics of insect flight, a longstanding avenue of inquiry for researchers. Chen’s work addresses these questions through a kind of reverse engineering. “If you want to learn how insects fly, it is very instructive to build a scale robot model,” he says. “You can perturb a few things and see how it affects the kinematics or how the fluid forces change. That will help you understand how those things fly.” But Chen aims to do more than add to entomology textbooks. His drones can also be useful in industry and agriculture.

Chen says his mini-aerialists could navigate complex machinery to ensure safety and functionality. “Think about the inspection of a turbine engine. You’d want a drone to move around [an enclosed space] with a small camera to check for cracks on the turbine plates.”

Other potential applications include artificial pollination of crops or completing search-and-rescue missions following a disaster. “All those things can be very challenging for existing large-scale robots,” says Chen. Sometimes, bigger isn’t better.

Fabricating fully functional drones

A laser-cutter with a custom end-effector
The MIT CSAIL team’s LaserFactory system can manufacture functional, custom-made devices and robots, without human intervention, potentially enabling rapid prototyping of items like wearables, robots, and printed electronics. Credits: Photo courtesy of the researchers.

By Rachel Gordon | MIT CSAIL

From Star Trek’s replicators to Richie Rich’s wishing machine, popular culture has a long history of parading flashy machines that can instantly output any item to a user’s delight. 

While 3D printers have now made it possible to produce a range of objects that include product models, jewelry, and novelty toys, we still lack the ability to fabricate more complex devices that are essentially ready-to-go right out of the printer. 

A group from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) recently developed a new system to print functional, custom-made devices and robots, without human intervention. Their single system uses a three-ingredient recipe that lets users create structural geometry, print traces, and assemble electronic components like sensors and actuators. 

“LaserFactory” has two parts that work in harmony: a software toolkit that allows users to design custom devices, and a hardware platform that fabricates them. 

CSAIL PhD student Martin Nisser says that this type of “one-stop shop” could be beneficial for product developers, makers, researchers, and educators looking to rapidly prototype things like wearables, robots, and printed electronics. 

“Making fabrication inexpensive, fast, and accessible to a layman remains a challenge,” says Nisser, lead author on a paper about LaserFactory that will appear in the ACM Conference on Human Factors in Computing Systems in May. “By leveraging widely available manufacturing platforms like 3D printers and laser cutters, LaserFactory is the first system that integrates these capabilities and automates the full pipeline for making functional devices in one system.” 

Inside LaserFactory

Let’s say a user has aspirations to create their own drone. They’d first design their device by placing components on it from a parts library, and then draw on circuit traces, which are the copper or aluminum lines on a printed circuit board that allow electricity to flow between electronic components. They’d then finalize the drone’s geometry in the 2D editor. In this case, they’d place propellers and batteries on the canvas, wire them up to make electrical connections, and draw the perimeter to define the quadcopter’s shape.

The user can then preview their design before the software translates their custom blueprint into machine instructions. The commands are embedded into a single fabrication file for LaserFactory to make the device in one go, aided by the standard laser cutter software. On the hardware side, an add-on that prints circuit traces and assembles components is clipped onto the laser cutter. 

Similar to a chef, LaserFactory automatically cuts the geometry, dispenses silver for circuit traces, picks and places components, and finally cures the silver to make the traces conductive, securing the components in place to complete fabrication. 
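As a rough sketch of what a single fabrication file might mean, the snippet below flattens a design into one ordered instruction list that mirrors the cut, dispense, place, and cure sequence just described. The instruction names and data format are hypothetical, not LaserFactory’s actual file format.

```python
# Hypothetical flattening of a design into one ordered machine program.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    x: float
    y: float

def build_fabrication_file(outline, traces, components):
    """Serialize a whole device into one linear instruction list."""
    program = [("CUT_GEOMETRY", outline)]              # laser-cut the body
    for trace in traces:
        program.append(("DISPENSE_SILVER", trace))     # print circuit traces
    for c in components:
        program.append(("PICK_AND_PLACE", c.name, (c.x, c.y)))
    program.append(("CURE_SILVER",))                   # make traces conductive
    return program

quad = build_fabrication_file(
    outline=[(0, 0), (120, 0), (120, 120), (0, 120)],
    traces=[[(10, 10), (60, 10)], [(60, 10), (60, 60)]],
    components=[Component("motor_1", 10, 10), Component("battery", 60, 60)],
)
for step in quad:
    print(step)
```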

The device is then fully functional, and in the case of the drone, it can immediately take off to begin a task — a feature that could in theory be used for diverse jobs such as delivery or search-and-rescue operations.

LaserFactory cutting a drone
LaserFactory (in background) can fabricate fully functional drones like this quadcopter. Credits: Photo courtesy of the researchers.

As a future avenue, the team hopes to increase the quality and resolution of the circuit traces, which would allow for denser and more complex electronics. 

As well as fine-tuning the current system, the researchers hope to build on this technology by exploring how to create a fuller range of 3D geometries, potentially through integrating traditional 3D printing into the process. 

“Beyond engineering, we’re also thinking about how this kind of one-stop shop for fabrication devices could be optimally integrated into today’s existing supply chains for manufacturing, and what challenges we may need to solve to allow for that to happen,” says Nisser. “In the future, people shouldn’t be expected to have an engineering degree to build robots, any more than they should have a computer science degree to install software.” 

This research is based upon work supported by the National Science Foundation. The work was also supported by a Microsoft Research Faculty Fellowship and The Royal Swedish Academy of Sciences.

Designing customized “brains” for robots

Illustration of robot surrounded by checked/crossed brains
MIT researchers have developed an automated way to design customized hardware, or “brains,” that speeds up a robot’s operation. Image credits: Jose-Luis Olivares, MIT

By Daniel Ackerman

Contemporary robots can move quickly. “The motors are fast, and they’re powerful,” says Sabrina Neuman.

Yet in complex situations, like interactions with people, robots often don’t move quickly. “The hang up is what’s going on in the robot’s head,” she adds.

Perceiving stimuli and calculating a response takes a “boatload of computation,” which limits reaction time, says Neuman, who recently graduated with a PhD from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Neuman has found a way to fight this mismatch between a robot’s “mind” and body. The method, called robomorphic computing, uses a robot’s physical layout and intended applications to generate a customized computer chip that minimizes the robot’s response time.

The advance could fuel a variety of robotics applications, including, potentially, frontline medical care of contagious patients. “It would be fantastic if we could have robots that could help reduce risk for patients and hospital workers,” says Neuman.

Neuman will present the research at this April’s International Conference on Architectural Support for Programming Languages and Operating Systems. MIT co-authors include graduate student Thomas Bourgeat and Srini Devadas, the Edwin Sibley Webster Professor of Electrical Engineering and Neuman’s PhD advisor. Other co-authors include Brian Plancher, Thierry Tambe, and Vijay Janapa Reddi, all of Harvard University. Neuman is now a postdoctoral NSF Computing Innovation Fellow at Harvard’s School of Engineering and Applied Sciences.

There are three main steps in a robot’s operation, according to Neuman. The first is perception, which includes gathering data using sensors or cameras. The second is mapping and localization: “Based on what they’ve seen, they have to construct a map of the world around them and then localize themselves within that map,” says Neuman. The third step is motion planning and control — in other words, plotting a course of action.

These steps can take time and an awful lot of computing power. “For robots to be deployed into the field and safely operate in dynamic environments around humans, they need to be able to think and react very quickly,” says Plancher. “Current algorithms cannot be run on current CPU hardware fast enough.”

Neuman adds that researchers have been investigating better algorithms, but she thinks software improvements alone aren’t the answer. “What’s relatively new is the idea that you might also explore better hardware.” That means moving beyond a standard-issue CPU processing chip that comprises a robot’s brain — with the help of hardware acceleration.

Hardware acceleration refers to the use of a specialized hardware unit to perform certain computing tasks more efficiently. A commonly used hardware accelerator is the graphics processing unit (GPU), a chip specialized for parallel processing. These devices are handy for graphics because their parallel structure allows them to simultaneously process thousands of pixels. “A GPU is not the best at everything, but it’s the best at what it’s built for,” says Neuman. “You get higher performance for a particular application.” Most robots are designed with an intended set of applications and could therefore benefit from hardware acceleration. That’s why Neuman’s team developed robomorphic computing.

The system creates a customized hardware design to best serve a particular robot’s computing needs. The user inputs the parameters of a robot, like its limb layout and how its various joints can move. Neuman’s system translates these physical properties into mathematical matrices. These matrices are “sparse,” meaning they contain many zero values that roughly correspond to movements that are impossible given a robot’s particular anatomy. (Similarly, your arm’s movements are limited because it can only bend at certain joints — it’s not an infinitely pliable spaghetti noodle.)

The system then designs a hardware architecture specialized to run calculations only on the non-zero values in the matrices. The resulting chip design is therefore tailored to maximize efficiency for the robot’s computing needs. And that customization paid off in testing.
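A toy example makes the sparsity idea concrete: for a serial kinematic chain, an entry (i, j) of such a matrix can be non-zero only when link j lies on the path from link i to the base, so most entries are structurally zero and never need circuitry. The matrix semantics below are a simplified stand-in for the actual robomorphic-computing formulation.

```python
# Simplified stand-in: morphology fixes which matrix entries can ever be
# non-zero, so only those are computed (in hardware, only those get circuits).
import numpy as np

parents = [-1, 0, 1, 2]  # a 4-link serial arm; each link's parent, -1 = base

def sparsity_mask(parents):
    """Entry (i, j) may be non-zero only if link j is on link i's path to base."""
    n = len(parents)
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        j = i
        while j != -1:               # walk up the chain toward the base
            mask[i, j] = True
            j = parents[j]
    return mask

mask = sparsity_mask(parents)
print(mask.astype(int))              # lower-triangular pattern for a chain
print(f"entries to compute: {mask.sum()} of {mask.size}")
```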

Hardware architecture designed using this method for a particular application outperformed off-the-shelf CPU and GPU units. While Neuman’s team didn’t fabricate a specialized chip from scratch, they programmed a customizable field-programmable gate array (FPGA) chip according to their system’s suggestions. Despite operating at a slower clock rate, that chip performed eight times faster than the CPU and 86 times faster than the GPU.

“I was thrilled with those results,” says Neuman. “Even though we were hamstrung by the lower clock speed, we made up for it by just being more efficient.”

Plancher sees widespread potential for robomorphic computing. “Ideally we can eventually fabricate a custom motion-planning chip for every robot, allowing them to quickly compute safe and efficient motions,” he says. “I wouldn’t be surprised if 20 years from now every robot had a handful of custom computer chips powering it, and this could be one of them.” Neuman adds that robomorphic computing might allow robots to relieve humans of risk in a range of settings, such as caring for Covid-19 patients or manipulating heavy objects.

“This work is exciting because it shows how specialized circuit designs can be used to accelerate a core component of robot control,” says Robin Deits, a robotics engineer at Boston Dynamics who was not involved in the research. “Software performance is crucial for robotics because the real world never waits around for the robot to finish thinking.” He adds that Neuman’s advance could enable robots to think faster, “unlocking exciting behaviors that previously would be too computationally difficult.”

Neuman next plans to automate the entire system of robomorphic computing. Users will simply drag and drop their robot’s parameters, and “out the other end comes the hardware description. I think that’s the thing that’ll push it over the edge and make it really useful.”

This research was funded by the National Science Foundation, the Computing Research Agency, the CIFellows Project, and the Defense Advanced Research Projects Agency.

Versatile building blocks make structures with surprising mechanical properties

Mechanical metamaterials
CBA researchers have created four different types of novel subunits, called voxels (a 3D variation on the pixels of a 2D image). Left to right: rigid (grey), compliant (purple), auxetic (orange), chiral (blue). Image credits: Benjamin Jenett, CBA

By David L. Chandler

Researchers at MIT’s Center for Bits and Atoms have created tiny building blocks that exhibit a variety of unique mechanical properties, such as the ability to produce a twisting motion when squeezed. These subunits could potentially be assembled by tiny robots into a nearly limitless variety of objects with built-in functionality, including vehicles, large industrial parts, or specialized robots that can be repeatedly reassembled in different forms.

The researchers created four different types of these subunits, called voxels (a 3D variation on the pixels of a 2D image). Each voxel type exhibits special properties not found in typical natural materials, and in combination they can be used to make devices that respond to environmental stimuli in predictable ways. Examples might include airplane wings or turbine blades that respond to changes in air pressure or wind speed by changing their overall shape.

The findings, which detail the creation of a family of discrete “mechanical metamaterials,” are described in a paper published in the journal Science Advances, authored by recent MIT doctoral graduate Benjamin Jenett PhD ’20, Professor Neil Gershenfeld, and four others.

“This remarkable, fundamental, and beautiful synthesis promises to revolutionize the cost, tailorability, and functional efficiency of ultralight, materials-frugal structures,” says Amory Lovins, an adjunct professor of civil and environmental engineering at Stanford University and cofounder of Rocky Mountain Institute, who was not associated with this work.

Metamaterials get their name because their large-scale properties are different from the microlevel properties of their component materials. They are used in electromagnetics and as “architected” materials, which are designed at the level of their microstructure. “But there hasn’t been much done on creating macroscopic mechanical properties as a metamaterial,” Gershenfeld says.

With this approach, engineers should be able to build structures incorporating a wide range of material properties — and produce them all using the same shared production and assembly processes, Gershenfeld says.

The voxels are assembled from flat frame pieces of injection-molded polymers, then combined into three-dimensional shapes that can be joined into larger functional structures. They are mostly open space and thus provide an extremely lightweight but rigid framework when assembled. Besides the basic rigid unit, which provides an exceptional combination of strength and light weight, there are three other variations of these voxels, each with a different unusual property.

The “auxetic” voxels have a strange property in which a cube of the material, when compressed, instead of bulging out at the sides, actually bulges inward. This is the first demonstration of such a material produced through conventional and inexpensive manufacturing methods.

There are also “compliant” voxels, with a zero Poisson ratio, which is somewhat similar to the auxetic property, but in this case, when the material is compressed, the sides do not change shape at all. Few known materials exhibit this property, which can now be produced through this new approach.

Finally, “chiral” voxels respond to axial compression or stretching with a twisting motion. Again, this is an uncommon property; research that produced one such material through complex fabrication techniques was hailed last year as a significant finding. This work makes this property easily accessible at macroscopic scales.
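The auxetic and compliant behaviors can be summarized with the textbook definition of Poisson’s ratio, which relates transverse strain to axial strain (the chiral twist lies outside this scalar description):

```latex
% Standard definition of Poisson's ratio, applied to the voxel types above.
\nu \;=\; -\,\frac{\varepsilon_{\text{transverse}}}{\varepsilon_{\text{axial}}},
\qquad
\begin{cases}
\nu > 0 & \text{typical materials: squeezed axially, they bulge sideways,}\\
\nu = 0 & \text{``compliant'' voxels: the sides do not change shape,}\\
\nu < 0 & \text{``auxetic'' voxels: squeezed axially, they contract sideways.}
\end{cases}
```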

“Each type of material property we’re showing has previously been its own field,” Gershenfeld says. “People would write papers on just that one property. This is the first thing that shows all of them in one single system.”

To demonstrate the real-world potential of large objects constructed in a LEGO-like manner out of these mass-produced voxels, the team, working in collaboration with engineers at Toyota, produced a functional super-mileage race car, which they demonstrated on a race track during an international robotics conference earlier this year.

They were able to assemble the lightweight, high-performance structure in just a month, Jenett says, whereas building a comparable structure using conventional fiberglass construction methods had previously taken a year.

During the race, the track was slick from rain, and the race car ended up crashing into a barrier. To the surprise of everyone involved, the car’s lattice-like internal structure deformed and then bounced back, absorbing the shock with little damage. A conventionally built car, Jenett says, would likely have been severely dented if it was made of metal, or shattered if it was composite.

The car provided a vivid demonstration of the fact that these tiny parts can indeed be used to make functional devices at human-sized scales. And, Gershenfeld points out, in the structure of the car, “these aren’t parts connected to something else. The whole thing is made out of nothing but these parts,” except for the motors and power supply.

Because the voxels are uniform in size and composition, they can be combined in any way needed to provide different functions for the resulting device. “We can span a wide range of material properties that before now have been considered very specialized,” Gershenfeld says. “The point is that you don’t have to pick one property. You can make, for example, robots that bend in one direction and are stiff in another direction and move only in certain ways. And so, the big change over our earlier work is this ability to span multiple mechanical material properties, that before now have been considered in isolation.”

Jenett, who carried out much of this work as the basis for his doctoral thesis, says “these parts are low-cost, easily produced, and very fast to assemble, and you get this range of properties all in one system. They’re all compatible with each other, so there’s all these different types of exotic properties, but they all play well with each other in the same scalable, inexpensive system.”

“Think about all the rigid parts and moving parts in cars and robots and boats and planes,” Gershenfeld says. “And we can span all of that with this one system.”

A key factor is that a structure made up of one type of these voxels will behave exactly the same way as the subunit itself, Jenett says. “We were able to demonstrate that the joints effectively disappear when you assemble the parts together. It behaves as a continuum, monolithic material.”

Whereas robotics research has tended to be divided between hard and soft robots, “this is very much neither,” Gershenfeld says, because of its potential to mix and match these properties within a single device.

One of the possible early applications of this technology, Jenett says, could be for building the blades of wind turbines. As these structures become ever larger, transporting the blades to their operating site becomes a serious logistical issue, whereas if they are assembled from thousands of tiny subunits, that job can be done at the site, eliminating the transportation issue. Similarly, the disposal of used turbine blades is already becoming a serious problem because of their large size and lack of recyclability. But blades made up of tiny voxels could be disassembled on site, and the voxels then reused to make something else.

And in addition, the blades themselves could be more efficient, because they could have a mix of mechanical properties designed into the structure that would allow them to respond dynamically, passively, to changes in wind strength, he says.

Overall, Jenett says, “Now we have this low-cost, scalable system, so we can design whatever we want to. We can do quadrupeds, we can do swimming robots, we can do flying robots. That flexibility is one of the key benefits of the system.”

Stanford’s Lovins says that this technology “could make inexpensive, durable, extraordinarily lightweight aeronautical flight surfaces that passively and continuously optimize their shape like a bird’s wing. It could also make automobiles’ empty mass more nearly approach their payload, as their crashworthy structure becomes mostly air. It may even permit spherical shells whose crush strength allows a vacuum balloon (with no helium) buoyant in the atmosphere to lift a couple of dozen times the net payload of a jumbo jet.”

He adds, “Like biomimicry and integrative design, this new art of cellular metamaterials is a powerful new tool for helping us do more with less.”

The research team included Filippos Tourlomousis, Alfonso Parra Rubio, and Megan Ochalek at MIT, and Christopher Cameron at the U.S. Army Research Laboratory. The work was supported by NASA, the U.S. Army Research Laboratory and the Center for Bits and Atoms Consortia.

Electronic design tool morphs interactive objects

MorphSensor glasses
An MIT team used MorphSensor to design multiple applications, including a pair of glasses that monitor light absorption to protect eye health. Credits: Photo courtesy of the researchers.

By Rachel Gordon

We’ve come a long way since the first 3D-printed item came to us by way of an eye wash cup, to now being able to rapidly fabricate things like car parts, musical instruments, and even biological tissues and organoids.

While many of these objects can be freely designed and quickly made, the addition of electronics to embed things like sensors, chips, and tags usually requires that you design both separately, making it difficult to create items where the added functions are easily integrated with the form.

Now, a 3D design environment from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) lets users iterate an object’s shape and electronic function in one cohesive space, to add existing sensors to early-stage prototypes.

The team tested the system, called MorphSensor, by modeling an N95 mask with a humidity sensor, a temperature-sensing ring, and glasses that monitor light absorption to protect eye health.

MorphSensor automatically converts electronic designs into 3D models, and then lets users iterate on the geometry and manipulate active sensing parts. This might look like a 2D image of a pair of AirPods and a sensor template, where a person could edit the design until the sensor is embedded, printed, and taped onto the item. 

To test the effectiveness of MorphSensor, the researchers created an evaluation based on standard industrial assembly and testing procedures. The data showed that MorphSensor could match the off-the-shelf sensor modules with small error margins, for both the analog and digital sensors.

“MorphSensor fits into my long-term vision of something called ‘rapid function prototyping’, with the objective to create interactive objects where the functions are directly integrated with the form and fabricated in one go, even for non-expert users,” says CSAIL PhD student Junyi Zhu, lead author on a new paper about the project. “This offers the promise that, when prototyping, the object form could follow its designated function, and the function could adapt to its physical form.” 

MorphSensor in action 

Imagine being able to have your own design lab where, instead of needing to buy new items, you could cost-effectively update your own items using a single system for both design and hardware. 

For example, let’s say you want to update your face mask to monitor surrounding air quality. Using MorphSensor, users would first design or import the 3D face mask model and sensor modules from either MorphSensor’s database or online open-sourced files. The system would then generate a 3D model with individual electronic components (with airwires connected between them) and color-coding to highlight the active sensing components.  

Designers can then drag and drop the electronic components directly onto the face mask, and rotate them based on design needs. As a final step, users draw physical wires onto the design where they want them to appear, using the system’s guidance to connect the circuit. 

Once satisfied with the design, the “morphed sensor” can be rapidly fabricated using an inkjet printer and conductive tape, so it can be adhered to the object. Users can also outsource the design to a professional fabrication house.  

To test their system, the team iterated on EarPods for sleep tracking, which only took 45 minutes to design and fabricate. They also updated a “weather-aware” ring to provide weather advice, by integrating a temperature sensor with the ring geometry. In addition, they manipulated an N95 mask to monitor its substrate contamination, enabling it to alert its user when the mask needs to be replaced.

In its current form, MorphSensor helps designers maintain connectivity of the circuit at all times, by highlighting which components contribute to the actual sensing. However, the team notes it would be beneficial to expand this set of support tools even further, where future versions could potentially merge electrical logic of multiple sensor modules together to eliminate redundant components and circuits and save space (or preserve the object form). 
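One way to picture that connectivity guidance is a union-find structure over component pins: each drawn wire merges two pins into one group, and any required connection whose pins remain in different groups is still unrouted. The pin names and netlist format below are invented for illustration, not MorphSensor’s internals.

```python
# Union-find over component pins: drawn wires merge pins; a required
# connection (airwire) whose pins sit in different groups is still unrouted.
class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

required = [("mcu.VCC", "sensor.VCC"), ("mcu.GND", "sensor.GND"),
            ("mcu.A0", "sensor.OUT")]
drawn_wires = [("mcu.VCC", "sensor.VCC"), ("mcu.GND", "sensor.GND")]

uf = UnionFind()
for a, b in drawn_wires:
    uf.union(a, b)

missing = [(a, b) for a, b in required if uf.find(a) != uf.find(b)]
print("unrouted airwires:", missing)   # -> [('mcu.A0', 'sensor.OUT')]
```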

Zhu wrote the paper alongside MIT graduate student Yunyi Zhu; undergraduates Jiaming Cui, Leon Cheng, Jackson Snowden, and Mark Chounlakone; postdoc Michael Wessely; and Professor Stefanie Mueller. The team will virtually present their paper at the ACM User Interface Software and Technology Symposium. 

This material is based upon work supported by the National Science Foundation.

Robot takes contact-free measurements of patients’ vital signs

By Anne Trafton

During the current coronavirus pandemic, one of the riskiest parts of a health care worker’s job is assessing people who have symptoms of Covid-19. Researchers from MIT, Boston Dynamics, and Brigham and Women’s Hospital hope to reduce that risk by using robots to remotely measure patients’ vital signs.

The robots, which are controlled by a handheld device, can also carry a tablet that allows doctors to ask patients about their symptoms without being in the same room.

“In robotics, one of our goals is to use automation and robotic technology to remove people from dangerous jobs,” says Henwei Huang, an MIT postdoc. “We thought it should be possible for us to use a robot to remove the health care worker from the risk of directly exposing themselves to the patient.”

Using four cameras mounted on a dog-like robot developed by Boston Dynamics, the researchers have shown that they can measure skin temperature, breathing rate, pulse rate, and blood oxygen saturation in healthy patients, from a distance of 2 meters. They are now making plans to test it in patients with Covid-19 symptoms.

“We are thrilled to have forged this industry-academia partnership in which scientists with engineering and robotics expertise worked with clinical teams at the hospital to bring sophisticated technologies to the bedside,” says Giovanni Traverso, an MIT assistant professor of mechanical engineering, a gastroenterologist at Brigham and Women’s Hospital, and the senior author of the study.

The researchers have posted a paper on their system on the preprint server techRxiv, and have submitted it to a peer-reviewed journal. Huang is one of the lead authors of the study, along with Peter Chai, an assistant professor of emergency medicine at Brigham and Women’s Hospital, and Claas Ehmke, a visiting scholar from ETH Zurich.

Measuring vital signs

When Covid-19 cases began surging in Boston in March, many hospitals, including Brigham and Women’s, set up triage tents outside their emergency departments to evaluate people with Covid-19 symptoms. One major component of this initial evaluation is measuring vital signs, including body temperature.

The MIT and BWH researchers came up with the idea to use robotics to enable contactless monitoring of vital signs, to allow health care workers to minimize their exposure to potentially infectious patients. They decided to use existing computer vision technologies that can measure temperature, breathing rate, pulse, and blood oxygen saturation, and worked to make them mobile.

To achieve that, they used a robot known as Spot, which can walk on four legs, similarly to a dog. Health care workers can maneuver the robot to wherever patients are sitting, using a handheld controller. The researchers mounted four different cameras onto the robot — an infrared camera plus three monochrome cameras that filter different wavelengths of light.

The researchers developed algorithms that allow them to use the infrared camera to measure both elevated skin temperature and breathing rate. For body temperature, the camera measures skin temperature on the face, and the algorithm correlates that temperature with core body temperature. The algorithm also takes into account the ambient temperature and the distance between the camera and the patient, so that measurements can be taken from different distances, under different weather conditions, and still be accurate.

Measurements from the infrared camera can also be used to calculate the patient’s breathing rate. As the patient breathes in and out, wearing a mask, their breath changes the temperature of the mask. Measuring this temperature change allows the researchers to calculate how rapidly the patient is breathing.

The three monochrome cameras each filter a different wavelength of light — 670, 810, and 880 nanometers. These wavelengths allow the researchers to measure the slight color changes that result when hemoglobin in blood cells binds to oxygen and flows through blood vessels. The researchers’ algorithm uses these measurements to calculate both pulse rate and blood oxygen saturation.
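The researchers’ exact algorithms aren’t reproduced here, but the core signal-processing step for both breathing rate and pulse can be sketched as picking the dominant frequency of a camera-derived time series within a physiologically plausible band. The frame rate, band limits, and synthetic signal below are assumptions.

```python
# Assumed parameters: a 30 Hz camera, a 30-second window, and a synthetic
# 16-breaths-per-minute signal standing in for the mask-temperature series.
import numpy as np

def dominant_rate_bpm(signal, fs, lo_bpm, hi_bpm):
    """Strongest spectral peak within a physiologically plausible band."""
    signal = signal - np.mean(signal)                  # drop the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs_bpm = np.fft.rfftfreq(len(signal), d=1.0 / fs) * 60.0
    band = (freqs_bpm >= lo_bpm) & (freqs_bpm <= hi_bpm)
    return freqs_bpm[band][np.argmax(spectrum[band])]

fs = 30.0
t = np.arange(0, 30, 1.0 / fs)
mask_temp = np.sin(2 * np.pi * (16 / 60) * t) + 0.1 * np.random.randn(t.size)
print(dominant_rate_bpm(mask_temp, fs, lo_bpm=6, hi_bpm=40))   # ~16 bpm
# The same peak-picking idea applies to pulse (e.g., a 40-180 bpm band) on the
# color-change signal from the monochrome cameras.
```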

“We didn’t really develop new technology to do the measurements,” Huang says. “What we did is integrate them together very specifically for the Covid application, to analyze different vital signs at the same time.”

Continuous monitoring

In this study, the researchers performed the measurements on healthy volunteers, and they are now making plans to test their robotic approach in people who are showing symptoms of Covid-19, in a hospital emergency department.

While in the near term, the researchers plan to focus on triage applications, in the longer term, they envision that the robots could be deployed in patients’ hospital rooms. This would allow the robots to continuously monitor patients and also allow doctors to check on them, via tablet, without having to enter the room. Both applications would require approval from the U.S. Food and Drug Administration.

The research was funded by the MIT Department of Mechanical Engineering and the Karl van Tassel (1925) Career Development Professorship, and Boston Dynamics.

Data systems that learn to be better

One of the biggest challenges in computing is handling a staggering onslaught of information while still being able to efficiently store and process it.

By Adam Conner-Simons

Big data has gotten really, really big: By 2025, all the world’s data will add up to an estimated 175 trillion gigabytes. For a visual, if you stored that amount of data on DVDs, it would stack up tall enough to circle the Earth 222 times. 

One of the biggest challenges in computing is handling this onslaught of information while still being able to efficiently store and process it. A team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) believes that the answer rests with something called “instance-optimized systems.”

Traditional storage and database systems are designed to work for a wide range of applications because of how long it can take to build them — months or, often, several years. As a result, for any given workload such systems provide performance that is good, but usually not the best. Even worse, they sometimes require administrators to painstakingly tune the system by hand to provide even reasonable performance. 

In contrast, instance-optimized systems are designed to optimize and partially re-organize themselves for the data they store and the workload they serve.

“It’s like building a database system for every application from scratch, which is not economically feasible with traditional system designs,” says MIT Professor Tim Kraska. 

As a first step toward this vision, Kraska and colleagues developed Tsunami and Bao. Tsunami uses machine learning to automatically re-organize a dataset’s storage layout based on the types of queries that its users make. Tests show that it can run queries up to 10 times faster than state-of-the-art systems. What’s more, its datasets can be organized via a series of “learned indexes” that are up to 100 times smaller than the indexes used in traditional systems. 

Kraska has been exploring the topic of learned indexes for several years, going back to his influential work with colleagues at Google in 2017. 
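The flavor of a learned index can be shown in a few lines: instead of walking a B-tree, a model predicts where a key sits in a sorted array, and a short local search corrects the prediction. The toy below uses a single linear fit rather than the recursive model index of the 2017 paper, and the error window is an assumption.

```python
# Toy learned index: a linear model predicts a key's position in a sorted
# array; a bounded local search fixes up the prediction, with a full binary
# search as a safety net.
import numpy as np

keys = np.sort(np.random.randint(0, 1_000_000, size=100_000))

# The entire "index" is two floats: position ~ a * key + b.
a, b = np.polyfit(keys.astype(float), np.arange(keys.size, dtype=float), deg=1)

def lookup(key, err=4096):
    guess = int(a * key + b)
    lo, hi = max(0, guess - err), min(keys.size, guess + err)
    i = lo + int(np.searchsorted(keys[lo:hi], key))    # search a small window
    if i < keys.size and keys[i] == key:
        return i
    i = int(np.searchsorted(keys, key))                # rare fallback
    return i if i < keys.size and keys[i] == key else None

probe = int(keys[1234])
assert keys[lookup(probe)] == probe
```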

Harvard University Professor Stratos Idreos, who was not involved in the Tsunami project, says that a unique advantage of learned indexes is their small size, which, in addition to space savings, brings substantial performance improvements.

“I think this line of work is a paradigm shift that’s going to impact system design long-term,” says Idreos. “I expect approaches based on models will be one of the core components at the heart of a new wave of adaptive systems.”

Bao, meanwhile, focuses on improving the efficiency of query optimization through machine learning. A query optimizer rewrites a high-level declarative query to a query plan, which can actually be executed over the data to compute the result of the query. However, often there exists more than one query plan to answer any query; picking the wrong one can cause a query to take days to compute the answer, rather than seconds.

Traditional query optimizers take years to build, are very hard to maintain, and, most importantly, do not learn from their mistakes. Bao is the first learning-based approach to query optimization that has been fully integrated into the popular database management system PostgreSQL. Lead author Ryan Marcus, a postdoc in Kraska’s group, says that Bao produces query plans that run up to 50 percent faster than those created by the PostgreSQL optimizer, meaning that it could help to significantly reduce the cost of cloud services, like Amazon’s Redshift, that are based on PostgreSQL.

By fusing the two systems together, Kraska hopes to build the first instance-optimized database system that can provide the best possible performance for each individual application without any manual tuning. 

The goal is to not only relieve developers from the daunting and laborious process of tuning database systems, but to also provide performance and cost benefits that are not possible with traditional systems.

Traditionally, the systems we use to store data are limited to only a few storage options and, as a result, they cannot provide the best possible performance for a given application. What Tsunami can do is dynamically change the structure of the data storage based on the kinds of queries that it receives and create new ways to store data, which are not feasible with more traditional approaches.

Johannes Gehrke, a managing director at Microsoft Research who also heads up machine learning efforts for Microsoft Teams, says that this work opens up many interesting applications, such as doing so-called “multidimensional queries” in main-memory data warehouses. Harvard’s Idreos also expects the project to spur further work on how to maintain the good performance of such systems when new data and new kinds of queries arrive.

Bao is short for “bandit optimizer,” a play on words related to the so-called “multi-armed bandit” analogy where a gambler tries to maximize their winnings at multiple slot machines that have different rates of return. The multi-armed bandit problem is commonly found in any situation that has tradeoffs between exploring multiple different options, versus exploiting a single option — from risk optimization to A/B testing.
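A stripped-down epsilon-greedy bandit over candidate query plans illustrates the explore-versus-exploit loop described above. Bao’s actual design pairs a learned value model with exploration over optimizer hints; the plans, latencies, and epsilon below are invented for illustration.

```python
# Toy epsilon-greedy bandit choosing among candidate query plans ("arms").
import random

class PlanBandit:
    def __init__(self, n_plans, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_plans
        self.mean_latency = [0.0] * n_plans   # running mean runtime per plan

    def choose(self):
        if 0 in self.counts or random.random() < self.epsilon:
            return random.randrange(len(self.counts))        # explore
        return min(range(len(self.counts)),
                   key=lambda i: self.mean_latency[i])       # exploit

    def record(self, plan, latency):
        self.counts[plan] += 1
        self.mean_latency[plan] += (
            (latency - self.mean_latency[plan]) / self.counts[plan])

true_latency = [2.0, 0.5, 1.2]                # hidden cost of each plan (s)
bandit = PlanBandit(n_plans=3)
for _ in range(500):
    p = bandit.choose()
    bandit.record(p, random.gauss(true_latency[p], 0.1))
print("best plan so far:", bandit.mean_latency.index(min(bandit.mean_latency)))
```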

“Query optimizers have been around for years, but they often make mistakes, and usually they don’t learn from them,” says Kraska. “That’s where we feel that our system can make key breakthroughs, as it can quickly learn for the given data and workload what query plans to use and which ones to avoid.”

Kraska says that in contrast to other learning-based approaches to query optimization, Bao learns much faster and can outperform open-source and commercial optimizers with as little as one hour of training time. In the future, his team aims to integrate Bao into cloud systems to improve resource utilization in environments where disk, RAM, and CPU time are scarce resources.

“Our hope is that a system like this will enable much faster query times, and that people will be able to answer questions they hadn’t been able to answer before,” says Kraska.

A related paper about Tsunami was co-written by Kraska, PhD students Jialin Ding and Vikram Nathan, and MIT Professor Mohammad Alizadeh. A paper about Bao was co-written by Kraska, Marcus, PhD students Parimarjan Negi and Hongzi Mao, visiting scientist Nesime Tatbul, and Alizadeh.

The work was done as part of the Data System and AI Lab (DSAIL@CSAIL), which is sponsored by Intel, Google, Microsoft, and the U.S. National Science Foundation. 

Letting robots manipulate cables


The opposing fingers are lightweight and quick moving, allowing nimble, real-time adjustments of force and position. | Photo courtesy of MIT CSAIL

By Rachel Gordon

For humans, it can be challenging to manipulate thin flexible objects like ropes, wires, or cables. But if these problems are hard for humans, they are nearly impossible for robots. As a cable slides between the fingers, its shape is constantly changing, and the robot’s fingers must be constantly sensing and adjusting the cable’s position and motion.

Standard approaches have used a series of slow and incremental deformations, as well as mechanical fixtures, to get the job done. Recently, a group of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) pursued the task from a different angle, in a manner that more closely mimics how humans handle cables. The team’s new system uses a pair of soft robotic grippers with high-resolution tactile sensors (and no added mechanical constraints) to successfully manipulate freely moving cables.

One could imagine using a system like this for both industrial and household tasks, to one day enable robots to help us with things like tying knots, wire shaping, or even surgical suturing. 

The team’s first step was to build a novel two-fingered gripper. The opposing fingers are lightweight and quick moving, allowing nimble, real-time adjustments of force and position. On the tips of the fingers are vision-based “GelSight” sensors, built from soft rubber with embedded cameras. The gripper is mounted on a robot arm, which can move as part of the control system.

The team’s second step was to create a perception-and-control framework to allow cable manipulation. For perception, they used the GelSight sensors to estimate the pose of the cable between the fingers, and to measure the frictional forces as the cable slides. Two controllers run in parallel: one modulates grip strength, while the other adjusts the gripper pose to keep the cable within the gripper.
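As a rough illustration of that parallel structure, here is a minimal Python sketch of the control loop: a proportional grip-force controller and a proportional pose controller running on each tactile update. All function names, sensor fields, and gains are hypothetical stand-ins, not the team’s implementation.

```python
# Illustrative sketch of the two parallel control loops described above.
# The tactile reading fields and gains are hypothetical; the actual CSAIL
# system estimates cable pose and friction from GelSight tactile images.

def grip_force_controller(friction_force, target_friction, grip_force, k_f=0.5):
    """Modulate grip strength so the cable slides smoothly without escaping."""
    error = target_friction - friction_force
    return grip_force + k_f * error

def pose_controller(cable_offset, cable_angle, k_y=1.0, k_theta=0.5):
    """Adjust gripper pose to keep the cable centered between the fingers."""
    dy = k_y * cable_offset        # move toward the cable's lateral offset
    dtheta = k_theta * cable_angle  # align with the cable's direction
    return dy, dtheta

def control_step(tactile_reading, grip_force):
    # Perception: estimate cable pose and friction from the tactile sensor.
    offset = tactile_reading["offset"]      # lateral offset within the grip
    angle = tactile_reading["angle"]        # cable direction vs. the fingers
    friction = tactile_reading["friction"]  # frictional force while sliding

    # The two controllers run in parallel on every sensor update.
    new_force = grip_force_controller(friction, target_friction=0.3,
                                      grip_force=grip_force)
    dy, dtheta = pose_controller(offset, angle)
    return new_force, dy, dtheta
```

The split mirrors the division of labor in the text: one loop keeps the sliding friction near a setpoint, while the other keeps the cable from drifting out of the fingers.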

When mounted on the arm, the gripper could reliably follow a USB cable starting from a random grasp position. Then, in combination with a second gripper, the robot could move the cable “hand over hand” (as a human would) in order to find the end of the cable. It could also adapt to cables of different materials and thicknesses.

As a further demo of its prowess, the robot performed an action that humans routinely do when plugging earbuds into a cell phone. Starting with a free-floating earbud cable, the robot was able to slide the cable between its fingers, stop when it felt the plug touch its fingers, adjust the plug’s pose, and finally insert the plug into the jack. 

“Manipulating soft objects is so common in our daily lives, like cable manipulation, cloth folding, and string knotting,” says Yu She, MIT postdoc and lead author on a new paper about the system. “In many cases, we would like to have robots help humans do this kind of work, especially when the tasks are repetitive, dull, or unsafe.” 

String me along 

Cable following is challenging for two reasons. First, it requires controlling both the “grasp force” (to enable smooth sliding) and the “grasp pose” (to prevent the cable from falling from the gripper’s fingers).

Second, this information is hard to capture with conventional vision systems during continuous manipulation: the cable is usually occluded, the imagery is expensive to interpret, and the estimates are sometimes inaccurate. Because the relevant forces and poses can’t be directly observed with vision alone, the team turned to tactile sensors. The gripper’s joints are also flexible, protecting them from potential impact.

The algorithms also generalize to cables with different physical properties, such as material, stiffness, and diameter, and to cables moving at different speeds.

When the team compared several controllers on their gripper, their control policy retained the cable in hand over longer distances than the three alternatives. The “open-loop” controller, for example, followed only 36 percent of the cable’s total length; it easily lost the cable when it curved and needed many regrasps to finish the task.

Looking ahead 

The team observed that it was difficult to pull the cable back when it reached the edge of the finger, because of the convex surface of the GelSight sensor. Therefore, they hope to improve the finger-sensor shape to enhance the overall performance. 

In the future, they plan to study more complex cable manipulation tasks such as cable routing and cable inserting through obstacles, and they want to eventually explore autonomous cable manipulation tasks in the auto industry.

Yu She wrote the paper alongside MIT PhD students Shaoxiong Wang, Siyuan Dong, and Neha Sunil; Alberto Rodriguez, MIT associate professor of mechanical engineering; and Edward Adelson, the John and Dorothy Wilson Professor in the MIT Department of Brain and Cognitive Sciences.

Robots help some firms, even while workers across industries struggle

A new study co-authored by an MIT professor shows firms that move quickly to use robots tend to add workers to their payroll, while industry job losses are more concentrated in firms that make this change more slowly.
Image: Stock photo

This is part 2 of a three-part series examining the effects of robots and automation on employment, based on new research from economist and Institute Professor Daron Acemoglu. 

By Peter Dizikes

Overall, adding robots to manufacturing reduces jobs — by more than three per robot, in fact. But a new study co-authored by an MIT professor reveals an important pattern: Firms that move quickly to use robots tend to add workers to their payroll, while industry job losses are more concentrated in firms that make this change more slowly.

The study, by MIT economist Daron Acemoglu, examines the introduction of robots to French manufacturing in recent decades, illuminating the business dynamics and labor implications in granular detail.

“When you look at use of robots at the firm level, it is really interesting because there is an additional dimension,” says Acemoglu. “We know firms are adopting robots in order to reduce their costs, so it is quite plausible that firms adopting robots early are going to expand at the expense of their competitors whose costs are not going down. And that’s exactly what we find.”

Indeed, as the study shows, a 20 percentage point increase in robot use in manufacturing from 2010 to 2015 led to a 3.2 percent decline in industry-wide employment. And yet, for firms adopting robots during that timespan, employee hours worked rose by 10.9 percent, and wages rose modestly as well.

A new paper detailing the study, “Competing with Robots: Firm-Level Evidence from France,” will appear in the May issue of the American Economic Association: Papers and Proceedings. The authors are Acemoglu, who is an Institute Professor at MIT; Clair Lelarge, a senior research economist at the Banque de France and the Center for Economic Policy Research; and Pascual Restrepo PhD ’16, an assistant professor of economics at Boston University.

A French robot census

To conduct the study, the scholars examined 55,390 French manufacturing firms, of which 598 purchased robots during the period from 2010 to 2015. The study uses data provided by France’s Ministry of Industry, client data from French robot suppliers, customs data about imported robots, and firm-level financial data concerning sales, employment, and wages, among other things.

The 598 firms that did purchase robots, while comprising just 1 percent of manufacturing firms, accounted for about 20 percent of manufacturing production during that five-year period.

“Our paper is unique in that we have an almost comprehensive [view] of robot adoption,” Acemoglu says.

The manufacturing industries most heavily adding robots to their production lines in France were pharmaceutical companies, chemicals and plastic manufacturers, food and beverage producers, metal and machinery manufacturers, and automakers.

The industries investing least in robots from 2010 to 2015 included paper and printing, textiles and apparel manufacturing, appliance manufacturers, furniture makers, and minerals companies.

The firms that did add robots to their manufacturing processes became more productive and profitable, and the use of automation lowered their labor share — the part of their income going to workers — between roughly 4 and 6 percentage points. However, because their investments in technology fueled more growth and more market share, they added more workers overall.

By contrast, the firms that did not add robots saw no change in the labor share, and for every 10 percentage point increase in robot adoption by their competitors, these firms saw their own employment drop 2.5 percent. Essentially, the firms not investing in technology were losing ground to their competitors.
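The direct and indirect effects can be combined in a quick back-of-envelope calculation. The sketch below uses the reported elasticities (10.9 percent growth at adopters; 2.5 percent loss per 10 points of competitor adoption, applied to the 20-point rise) on a stylized two-firm industry; the firm sizes are hypothetical, chosen only to show how the industry total can shrink while the adopter grows.

```python
# Back-of-envelope illustration of the direct vs. indirect effects reported
# above, using a stylized industry; the firm sizes are hypothetical.
adopter_hours = 1000      # hours at a robot-adopting firm
non_adopter_hours = 4000  # hours at its non-adopting competitors

# Reported effects for 2010-2015: adopters' hours rose 10.9 percent, while
# competitors lost 2.5 percent of employment per 10-point rise in robot
# adoption, i.e., about 5 percent for the 20-point rise.
adopter_after = adopter_hours * 1.109
non_adopter_after = non_adopter_hours * (1 - 0.025 * 2)

total_before = adopter_hours + non_adopter_hours
total_after = adopter_after + non_adopter_after
print(f"Industry change: {100 * (total_after / total_before - 1):.1f}%")
# With these (hypothetical) firm sizes, the industry shrinks by about 1.8
# percent even though the adopter grew -- the pattern the study documents.
```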

This dynamic — job growth at robot-adopting firms, but job losses overall — fits with another finding Acemoglu and Restrepo made in a separate paper about the effects of robots on employment in the U.S. There, the economists found that each robot added to the work force essentially eliminated 3.3 jobs nationally.

“Looking at the result, you might think [at first] it’s the opposite of the U.S. result, where the robot adoption goes hand in hand with destruction of jobs, whereas in France, robot-adopting firms are expanding their employment,” Acemoglu says. “But that’s only because they’re expanding at the expense of their competitors. What we show is that when we add the indirect effect on those competitors, the overall effect is negative and comparable to what we find in the U.S.”

Superstar firms and the labor share issue

The competitive dynamics the researchers found in France resemble those in another high-profile piece of economics research by MIT professors. In a recent paper, MIT economists David Autor and John Van Reenen, along with three co-authors, published evidence indicating that the decline in the labor share in the U.S. as a whole has been driven by gains made by “superstar firms,” which find ways to lower their labor share and gain market power.

While those elite firms may hire more workers and even pay relatively well as they grow, labor share declines in their industries, overall.

“It’s very complementary,” Acemoglu observes about the work of Autor and Van Reenen. However, he notes, “A slight difference is that superstar firms [in the work of Autor and Van Reenen, in the U.S.] could come from many different sources. By having this individual firm-level technology data, we are able to show that a lot of this is about automation.”

So, while economists have offered many possible explanations for the decline of the labor share generally — including technology, tax policy, changes in labor market institutions, and more — Acemoglu suspects technology, and automation specifically, is the prime candidate, certainly in France.

“A big part of the [economic] literature now on technology, globalization, labor market institutions, is turning to the question of what explains the decline in the labor share,” Acemoglu says. “Many of those are reasonably interesting hypotheses, but in France it’s only the firms that adopt robots — and they are very large firms — that are reducing their labor share, and that’s what accounts for the entirety of the decline in the labor share in French manufacturing. This really emphasizes that automation, and in particular robots, is a critical part in understanding what’s going on.”

How many jobs do robots really replace?

MIT professor Daron Acemoglu is co-author of a new study showing that each robot added to the workforce has the effect of replacing 3.3 jobs across the U.S.
Image: Stock image edited by MIT News
By Peter Dizikes

This is part 1 of a three-part series examining the effects of robots and automation on employment, based on new research from economist and Institute Professor Daron Acemoglu.  

In many parts of the U.S., robots have been replacing workers over the last few decades. But to what extent, really? Some technologists have forecast that automation will lead to a future without work, while other observers have been more skeptical about such scenarios.

Now a study co-authored by an MIT professor puts firm numbers on the trend, finding a very real impact — although one that falls well short of a robot takeover. The study also finds that in the U.S., the impact of robots varies widely by industry and region, and may play a notable role in exacerbating income inequality.

“We find fairly major negative employment effects,” MIT economist Daron Acemoglu says, although he notes that the impact of the trend can be overstated.

From 1990 to 2007, the study shows, adding one additional robot per 1,000 workers reduced the national employment-to-population ratio by about 0.2 percent, with some areas of the U.S. affected far more than others.

This means each additional robot added in manufacturing replaced about 3.3 workers nationally, on average.

That increased use of robots in the workplace also lowered wages by roughly 0.4 percent during the same time period.

“We find negative wage effects, that workers are losing in terms of real wages in more affected areas, because robots are pretty good at competing against them,” Acemoglu says.

The paper, “Robots and Jobs: Evidence from U.S. Labor Markets,” appears in advance online form in the Journal of Political Economy. The authors are Acemoglu and Pascual Restrepo PhD ’16, an assistant professor of economics at Boston University.

Displaced in Detroit

To conduct the study, Acemoglu and Restrepo used data on 19 industries, compiled by the International Federation of Robotics (IFR), a Frankfurt-based industry group that keeps detailed statistics on robot deployments worldwide. The scholars combined that with U.S.-based data on population, employment, business, and wages, from the U.S. Census Bureau, the Bureau of Economic Analysis, and the Bureau of Labor Statistics, among other sources.

The researchers also compared robot deployment in the U.S. to that of other countries, finding it lags behind that of Europe. From 1993 to 2007, U.S. firms actually did introduce almost exactly one new robot per 1,000 workers; in Europe, firms introduced 1.6 new robots per 1,000 workers.

“Even though the U.S. is a technologically very advanced economy, in terms of industrial robots’ production and usage and innovation, it’s behind many other advanced economies,” Acemoglu says.

In the U.S., four manufacturing industries account for 70 percent of robots: automakers (38 percent of robots in use), electronics (15 percent), the plastics and chemical industry (10 percent), and metals manufacturers (7 percent).

The study analyzed the impact of robots on 722 commuting zones (essentially metropolitan areas) in the continental U.S. and found considerable geographic variation in how intensively robots are utilized.

Given industry trends in robot deployment, the area of the country most affected is the seat of the automobile industry. Michigan has the highest concentration of robots in the workplace, with employment in Detroit, Lansing, and Saginaw affected more than anywhere else in the country.

“Different industries have different footprints in different places in the U.S.,” Acemoglu observes. “The place where the robot issue is most apparent is Detroit. Whatever happens to automobile manufacturing has a much greater impact on the Detroit area [than elsewhere].”

In commuting zones where robots were added to the workforce, each robot replaced about 6.6 jobs locally, the researchers found. However, in a subtle twist, adding robots in manufacturing benefits people in other industries and other areas of the country, by lowering the cost of goods, among other things. These national economic benefits are the reason the researchers calculated that adding one robot replaces 3.3 jobs for the country as a whole.
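The gap between the local and national figures implies an offsetting gain elsewhere in the economy, which the short calculation below makes explicit; the offset is derived from the two reported numbers, not a figure reported in the paper itself.

```python
# The difference between the local and national estimates implies an
# offsetting gain elsewhere in the economy (an illustrative decomposition).
jobs_lost_locally_per_robot = 6.6  # direct effect in the commuting zone
net_jobs_lost_nationally = 3.3     # after economy-wide benefits

implied_offset = jobs_lost_locally_per_robot - net_jobs_lost_nationally
print(f"Implied jobs supported elsewhere per robot: {implied_offset:.1f}")
# Roughly 3.3 jobs per robot are offset by cheaper goods and other
# spillovers in other industries and regions.
```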

The inequality issue

In conducting the study, Acemoglu and Restrepo went to considerable lengths to see if the employment trends in robot-heavy areas might have been caused by other factors, such as trade policy, but they found no complicating empirical effects.

The study does suggest, however, that robots have a direct influence on income inequality. The manufacturing jobs they replace come from parts of the workforce without many other good employment options; as a result, there is a direct connection between automation in robot-using industries and sagging incomes among blue-collar workers.

“There are major distributional implications,” Acemoglu says. When robots are added to manufacturing plants, “The burden falls on the low-skill and especially middle-skill workers. That’s really an important part of our overall research [on robots], that automation actually is a much bigger part of the technological factors that have contributed to rising inequality over the last 30 years.”

So while claims about machines wiping out human work entirely may be overstated, the research by Acemoglu and Restrepo shows that the robot effect is a very real one in manufacturing, with significant social implications.

“It certainly won’t give any support to those who think robots are going to take all of our jobs,” Acemoglu says. “But it does imply that automation is a real force to be grappled with.”

Study finds stronger links between automation and inequality


By Peter Dizikes

This is part 3 of a three-part series examining the effects of robots and automation on employment, based on new research from economist and Institute Professor Daron Acemoglu. 

Modern technology affects different workers in different ways. In some white-collar jobs — designer, engineer — people become more productive with sophisticated software at their side. In other cases, forms of automation, from robots to phone-answering systems, have simply replaced factory workers, receptionists, and many other kinds of employees.

Now a new study co-authored by an MIT economist suggests automation has a bigger impact on the labor market and income inequality than previous research would indicate — and identifies the year 1987 as a key inflection point in this process, the moment when jobs lost to automation stopped being replaced by an equal number of similar workplace opportunities.

“Automation is critical for understanding inequality dynamics,” says MIT economist Daron Acemoglu, co-author of a newly published paper detailing the findings.

Within industries adopting automation, the study shows, the average “displacement” (or job loss) from 1947-1987 was 17 percent of jobs, while the average “reinstatement” (new opportunities) was 19 percent. But from 1987-2016, displacement was 16 percent, while reinstatement was just 10 percent. In short, those factory positions or phone-answering jobs are not coming back.
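The subtraction behind that conclusion is simple but worth making explicit; the snippet below just computes the net task change implied by the reported averages.

```python
# Net job change implied by the displacement and reinstatement rates above.
periods = {
    "1947-1987": {"displacement": 0.17, "reinstatement": 0.19},
    "1987-2016": {"displacement": 0.16, "reinstatement": 0.10},
}
for era, r in periods.items():
    net = r["reinstatement"] - r["displacement"]
    print(f"{era}: net change {net:+.0%}")
# 1947-1987: +2% (jobs lost were roughly replaced);
# 1987-2016: -6% (they were not).
```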

“A lot of the new job opportunities that technology brought from the 1960s to the 1980s benefitted low-skill workers,” Acemoglu adds. “But from the 1980s, and especially in the 1990s and 2000s, there’s a double whammy for low-skill workers: They’re hurt by displacement, and the new tasks that are coming, are coming slower and benefitting high-skill workers.”

The new paper, “Unpacking Skill Bias: Automation and New Tasks,” will appear in the May issue of the American Economic Association: Papers and Proceedings. The authors are Acemoglu, who is an Institute Professor at MIT, and Pascual Restrepo PhD ’16, an assistant professor of economics at Boston University.

Low-skill workers: Moving backward

The new paper is one of several studies Acemoglu and Restrepo have conducted recently examining the effects of robots and automation in the workplace. In a just-published paper, they concluded that across the U.S. from 1993 to 2007, each new robot replaced 3.3 jobs.

In still another new paper, Acemoglu and Restrepo examined French industry from 2010 to 2015. They found that firms that quickly adopted robots became more productive and hired more workers, while their competitors fell behind and shed workers — with jobs again being reduced overall.

In the current study, Acemoglu and Restrepo construct a model of technology’s effects on the labor market, while testing the model’s strength by using empirical data from 44 relevant industries. (The study uses U.S. Census statistics on employment and wages, as well as economic data from the Bureau of Economic Analysis and the Bureau of Labor Statistics, among other sources.)

The result is an alternative to the standard economic modeling in the field, which has emphasized the idea of “skill-biased” technological change, meaning that technology tends to benefit select high-skilled workers more than low-skill workers, raising the wages of high-skilled workers while the value of other workers stagnates. Think again of highly trained engineers who use new software to finish more projects more quickly: They become more productive and valuable, while workers lacking synergy with new technology are comparatively less valued.

However, Acemoglu and Restrepo think even this scenario, with the prosperity gap it implies, is still too benign. Where automation occurs, lower-skill workers are not just failing to make gains; they are actively pushed backward financially. Moreover, Acemoglu and Restrepo note, the standard model of skill-biased change does not fully account for this dynamic; it estimates that productivity gains and real (inflation-adjusted) wages of workers should be higher than they actually are.

More specifically, the standard model implies an estimate of about 2 percent annual growth in productivity since 1963, whereas annual productivity gains have been about 1.2 percent; it also estimates wage growth for low-skill workers of about 1 percent per year, whereas real wages for low-skill workers have actually dropped since the 1970s.

“Productivity growth has been lackluster, and real wages have fallen,” Acemoglu says. “Automation accounts for both of those.” Moreover, he adds, “Demand for skills has gone down almost exclusively in industries that have seen a lot of automation.”

Why “so-so technologies” are so, so bad

Indeed, Acemoglu says, automation is a special case within the larger set of technological changes in the workplace. As he puts it, automation “is different than garden-variety skill-biased technological change,” because it can replace jobs without adding much productivity to the economy.

Think of a self-checkout system in your supermarket or pharmacy: it reduces the store’s labor costs without making the task more efficient; the work is simply done by you rather than by paid employees. These kinds of systems are what Acemoglu and Restrepo have termed “so-so technologies,” because of the minimal value they offer.

“So-so technologies are not really doing a fantastic job, nobody’s enthusiastic about going one-by-one through their items at checkout, and nobody likes it when the airline they’re calling puts them through automated menus,” Acemoglu says. “So-so technologies are cost-saving devices for firms that just reduce their costs a little bit but don’t increase productivity by much. They create the usual displacement effect but don’t benefit other workers that much, and firms have no reason to hire more workers or pay other workers more.”

To be sure, not all automation resembles self-checkout systems, which were not around in 1987. Automation at that time consisted more of printed office records being converted into databases, or machinery being added to sectors like textiles and furniture-making. Robots became more commonly used in heavy industrial manufacturing in the 1990s. Automation is a suite of technologies that continues today with software and AI, and these technologies are inherently worker-displacing.

“Displacement is really the center of our theory,” Acemoglu says. “And it has grimmer implications, because wage inequality is associated with disruptive changes for workers. It’s a much more Luddite explanation.”

After all, the Luddites, the British textile mill workers who destroyed machinery in the 1810s, may be synonymous with technophobia, but their actions were motivated by economic concerns; they knew machines were replacing their jobs. That same displacement continues today, although, Acemoglu contends, the net negative consequences of technology for jobs are not inevitable. We could, perhaps, find more ways to produce job-enhancing technologies rather than job-replacing innovations.

“It’s not all doom and gloom,” says Acemoglu. “There is nothing that says technology is all bad for workers. It is the choice we make about the direction to develop technology that is critical.”

System trains driverless cars in simulation before they hit the road


A simulation system invented at MIT to train driverless cars creates a photorealistic world with infinite steering possibilities, helping the cars learn to navigate a host of worst-case scenarios before cruising down real streets.

By Rob Matheson

A simulation system invented at MIT to train driverless cars creates a photorealistic world with infinite steering possibilities, helping the cars learn to navigate a host of worst-case scenarios before cruising down real streets.

Control systems, or “controllers,” for autonomous vehicles largely rely on real-world datasets of driving trajectories from human drivers. From these data, they learn how to emulate safe steering controls in a variety of situations. But real-world data from hazardous “edge cases,” such as nearly crashing or being forced off the road or into other lanes, are — fortunately — rare.

Some computer programs, called “simulation engines,” aim to imitate these situations by rendering detailed virtual roads to help train the controllers to recover. But until now, learned control from simulation had never been shown to transfer to reality on a full-scale vehicle.

The MIT researchers tackle the problem with their photorealistic simulator, called Virtual Image Synthesis and Transformation for Autonomy (VISTA). It uses only a small dataset, captured by humans driving on a road, to synthesize a practically infinite number of new viewpoints from trajectories that the vehicle could take in the real world. The controller is rewarded for the distance it travels without crashing, so it must learn by itself how to reach a destination safely. In doing so, the vehicle learns to safely navigate any situation it encounters, including regaining control after swerving between lanes or recovering from near-crashes.  

In tests, a controller trained within the VISTA simulator was able to be safely deployed onto a full-scale driverless car and to navigate through previously unseen streets. When the researchers positioned the car at off-road orientations that mimicked various near-crash situations, the controller was also able to successfully recover the car back into a safe driving trajectory within a few seconds. A paper describing the system has been published in IEEE Robotics and Automation Letters and will be presented at the upcoming ICRA conference in May.

“It’s tough to collect data in these edge cases that humans don’t experience on the road,” says first author Alexander Amini, a PhD student in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “In our simulation, however, control systems can experience those situations, learn for themselves to recover from them, and remain robust when deployed onto vehicles in the real world.”

The work was done in collaboration with the Toyota Research Institute. Joining Amini on the paper are Igor Gilitschenski, a postdoc in CSAIL; Jacob Phillips, Julia Moseyko, and Rohan Banerjee, all undergraduates in CSAIL and the Department of Electrical Engineering and Computer Science; Sertac Karaman, an associate professor of aeronautics and astronautics; and Daniela Rus, director of CSAIL and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science.

Data-driven simulation

Historically, building simulation engines for training and testing autonomous vehicles has been largely a manual task. Companies and universities often employ teams of artists and engineers to sketch virtual environments, with accurate road markings, lanes, and even detailed leaves on trees. Some engines may also incorporate the physics of a car’s interaction with its environment, based on complex mathematical models.

But since there are so many different things to consider in complex real-world environments, it’s practically impossible to incorporate everything into the simulator. For that reason, there’s usually a mismatch between what controllers learn in simulation and how they operate in the real world.

Instead, the MIT researchers created what they call a “data-driven” simulation engine that synthesizes, from real data, new trajectories consistent with road appearance, as well as the distance and motion of all objects in the scene.

They first collect video data from a human driving down a few roads and feed that into the engine. For each frame, the engine projects every pixel into a type of 3D point cloud. Then, they place a virtual vehicle inside that world. When the vehicle makes a steering command, the engine synthesizes a new trajectory through the point cloud, based on the steering curve and the vehicle’s orientation and velocity.

Then, the engine uses that new trajectory to render a photorealistic scene. To do so, it uses a convolutional neural network — commonly used for image-processing tasks — to estimate a depth map, which contains information relating to the distance of objects from the controller’s viewpoint. It then combines the depth map with a technique that estimates the camera’s orientation within a 3D scene. That all helps pinpoint the vehicle’s location and relative distance from everything within the virtual simulator.

Based on that information, it reorients the original pixels to recreate a 3D representation of the world from the vehicle’s new viewpoint. It also tracks the motion of the pixels to capture the movement of the cars and people, and other moving objects, in the scene. “This is equivalent to providing the vehicle with an infinite number of possible trajectories,” Rus says. “Because when we collect physical data, we get data from the specific trajectory the car will follow. But we can modify that trajectory to cover all possible ways and environments of driving. That’s really powerful.”
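In essence, the rendering step is a depth-based image reprojection. The NumPy sketch below shows the geometric core under a simplified pinhole-camera model; it omits the learned depth estimation, hole filling, and motion handling that VISTA performs, and every name in it is an assumption of ours.

```python
import numpy as np

def reproject(image, depth, K, R, t):
    """Warp an image to a new viewpoint given per-pixel depth (illustrative).

    A highly simplified version of depth-based view synthesis: lift each
    pixel into 3D using the depth map and camera intrinsics K, apply the
    relative pose (R, t) of the new viewpoint, and project back down.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pixels = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])  # homogeneous

    # Back-project every pixel to a 3D point in the original camera frame.
    points = np.linalg.inv(K) @ pixels * depth.ravel()

    # Transform the points into the new camera frame and project them.
    points_new = R @ points + t.reshape(3, 1)
    proj = K @ points_new
    uv_new = (proj[:2] / proj[2]).round().astype(int)

    # Scatter source pixels into the new view (no occlusion handling here).
    out = np.zeros_like(image)
    valid = (0 <= uv_new[0]) & (uv_new[0] < w) & \
            (0 <= uv_new[1]) & (uv_new[1] < h)
    out[uv_new[1, valid], uv_new[0, valid]] = image[v.ravel()[valid],
                                                    u.ravel()[valid]]
    return out
```

Every new steering command corresponds to a new relative pose (R, t), so one recorded drive can be re-rendered from a practically unlimited set of viewpoints.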

Reinforcement learning from scratch

Traditionally, researchers have trained autonomous vehicles either by following human-defined rules of driving or by trying to imitate human drivers. The MIT researchers instead make their controller learn entirely from scratch under an “end-to-end” framework, meaning it takes as input only raw sensor data, such as visual observations of the road, and, from that data, predicts steering commands as outputs.

“We basically say, ‘Here’s an environment. You can do whatever you want. Just don’t crash into vehicles, and stay inside the lanes,’” Amini says.

This requires “reinforcement learning” (RL), a trial-and-error machine-learning technique that provides feedback signals whenever the car makes an error. In the researchers’ simulation engine, the controller begins by knowing nothing about how to drive, what a lane marker is, or even what other vehicles look like, so it starts out executing random steering angles. It gets a feedback signal only when it crashes. At that point, it gets teleported to a new simulated location and has to execute a better set of steering angles to avoid crashing again. Over 10 to 15 hours of training, it uses these sparse feedback signals to learn to travel greater and greater distances without crashing.
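The training loop just described can be summarized in a short schematic. The environment and policy interfaces below are placeholders of our own devising, not the researchers’ code; the point is the sparseness of the feedback, which arrives only at a crash.

```python
# Schematic of the sparse-reward training loop described above. The
# environment and policy APIs are hypothetical placeholders.
def train(env, policy, episodes=10_000):
    for _ in range(episodes):
        obs = env.reset()  # teleport to a new simulated location
        trajectory = []
        crashed = False
        while not crashed:
            steering = policy.act(obs)      # raw pixels in, steering out
            obs, crashed = env.step(steering)
            trajectory.append((obs, steering))
        # The only feedback signal: distance covered before the crash.
        reward = env.distance_traveled()
        policy.update(trajectory, reward)   # reinforce longer runs
```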

After successfully driving 10,000 kilometers in simulation, the authors applied the learned controller to their full-scale autonomous vehicle in the real world. The researchers say this is the first time a controller trained using end-to-end reinforcement learning in simulation has successfully been deployed onto a full-scale autonomous car. “That was surprising to us. Not only has the controller never been on a real car before, but it’s also never even seen the roads before and has no prior knowledge on how humans drive,” Amini says.

Forcing the controller to run through all types of driving scenarios enabled it to regain control from disorienting positions — such as being half off the road or into another lane — and steer back into the correct lane within several seconds. “And other state-of-the-art controllers all tragically failed at that, because they never saw any data like this in training,” Amini says.

Next, the researchers hope to simulate all types of road conditions from a single driving trajectory, such as night and day, and sunny and rainy weather. They also hope to simulate more complex interactions with other vehicles on the road. “What if other cars start moving and jump in front of the vehicle?” Rus says. “Those are complex, real-world interactions we want to start testing.”

Showing robots how to do your chores

Roboticists are developing automated robots that can learn new tasks solely by observing humans. At home, you might someday show a domestic robot how to do routine chores.
Image: Christine Daniloff, MIT

By Rob Matheson

Training interactive robots may one day be an easy job for everyone, even those without programming expertise. Roboticists are developing automated robots that can learn new tasks solely by observing humans. At home, you might someday show a domestic robot how to do routine chores. In the workplace, you could train robots like new employees, showing them how to perform many duties.

Making progress on that vision, MIT researchers have designed a system that lets these types of robots learn complicated tasks that would otherwise stymie them with too many confusing rules. One such task is setting a dinner table under certain conditions.  

At its core, the researchers’ “Planning with Uncertain Specifications” (PUnS) system gives robots the humanlike planning ability to simultaneously weigh many ambiguous — and potentially contradictory — requirements to reach an end goal. In doing so, the system always chooses the most likely action to take, based on a “belief” about some probable specifications for the task it is supposed to perform.

In their work, the researchers compiled a dataset with information about how eight objects — a mug, glass, spoon, fork, knife, dinner plate, small plate, and bowl — could be placed on a table in various configurations. A robotic arm first observed randomly selected human demonstrations of setting the table with the objects. Then, the researchers tasked the arm with automatically setting a table in a specific configuration, in real-world experiments and in simulation, based on what it had seen.

To succeed, the robot had to weigh many possible placement orderings, even when items were purposely removed, stacked, or hidden. Normally, all of that would confuse robots too much. But the researchers’ robot made no mistakes over several real-world experiments, and only a handful of mistakes over tens of thousands of simulated test runs.  

“The vision is to put programming in the hands of domain experts, who can program robots through intuitive ways, rather than describing orders to an engineer to add to their code,” says first author Ankit Shah, a graduate student in the Department of Aeronautics and Astronautics (AeroAstro) and the Interactive Robotics Group, who emphasizes that their work is just one step in fulfilling that vision. “That way, robots won’t have to perform preprogrammed tasks anymore. Factory workers can teach a robot to do multiple complex assembly tasks. Domestic robots can learn how to stack cabinets, load the dishwasher, or set the table from people at home.”

Joining Shah on the paper are AeroAstro and Interactive Robotics Group graduate student Shen Li and Interactive Robotics Group leader Julie Shah, an associate professor in AeroAstro and the Computer Science and Artificial Intelligence Laboratory.

Bots hedging bets

Robots are fine planners in tasks with clear “specifications,” which help describe the task the robot needs to fulfill, considering its actions, environment, and end goal. Learning to set a table by observing demonstrations, however, is full of uncertain specifications. Items must be placed in certain spots, depending on the menu and where guests are seated, and in certain orders, depending on an item’s immediate availability or social conventions. Present approaches to planning are not capable of dealing with such uncertain specifications.

A popular approach to planning is “reinforcement learning,” a trial-and-error machine-learning technique that rewards and penalizes a robot for its actions as it works to complete a task. But for tasks with uncertain specifications, it’s difficult to define clear rewards and penalties. In short, robots never fully learn right from wrong.

The PUnS system instead enables a robot to hold a “belief” over a range of possible specifications. The belief itself can then be used to dish out rewards and penalties. “The robot is essentially hedging its bets in terms of what’s intended in a task, and takes actions that satisfy its belief, instead of us giving it a clear specification,” Ankit Shah says.

The system is built on “linear temporal logic” (LTL), an expressive language that enables robotic reasoning about current and future outcomes. The researchers defined templates in LTL that model various time-based conditions, such as what must happen now, must eventually happen, and must happen until something else occurs. The robot’s observations of 30 human demonstrations for setting the table yielded a probability distribution over 25 different LTL formulas. Each formula encoded a slightly different preference — or specification — for setting the table. That probability distribution becomes its belief.
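To make the notion of a belief concrete, here is one way such a distribution might be represented in code. The formulas below are hypothetical shorthand (F for “eventually,” U for “until,” X for “next”), not the 25 formulas the system actually learned.

```python
# Illustrative representation of a "belief" over candidate specifications.
# Formulas are abbreviated strings; the real system learned 25 LTL
# formulas from 30 demonstrations.
belief = {
    "F(plate) & F(fork) & F(knife)": 0.40,  # everything eventually placed
    "!fork U plate":                 0.25,  # no fork until the plate is down
    "F(plate & X F(fork))":          0.20,  # plate first, fork sometime later
    "F(spoon)":                      0.15,  # spoon eventually placed
}
assert abs(sum(belief.values()) - 1.0) < 1e-9  # a probability distribution
```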

“Each formula encodes something different, but when the robot considers various combinations of all the templates, and tries to satisfy everything together, it ends up doing the right thing eventually,” Ankit Shah says.

Following criteria

The researchers also developed several criteria that guide the robot toward satisfying the entire belief over those candidate formulas. One, for instance, satisfies only the most likely formula, discarding everything apart from the template with the highest probability. Another satisfies the largest number of unique formulas, without considering their overall probability, while a third satisfies the several formulas that together represent the highest total probability. A fourth simply minimizes error, so the system ignores formulas with a high probability of failure.
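A minimal sketch of how those four criteria could be implemented over the belief from the previous snippet follows; the function names and thresholds are our own assumptions, not the paper’s.

```python
# Minimal sketches of the four selection criteria; each takes the belief
# dict (formula -> probability) and returns the formulas to satisfy.
def most_likely(belief):
    # Keep only the single formula with the highest probability.
    return [max(belief, key=belief.get)]

def most_coverage(belief):
    # Satisfy as many unique formulas as possible, ignoring probability.
    return list(belief)

def top_mass(belief, mass=0.9):
    # Satisfy the smallest set of formulas covering `mass` total probability.
    chosen, total = [], 0.0
    for f in sorted(belief, key=belief.get, reverse=True):
        chosen.append(f)
        total += belief[f]
        if total >= mass:
            break
    return chosen

def min_risk(belief, threshold=0.05):
    # Ignore formulas so unlikely that following them mostly risks failure.
    return [f for f, p in belief.items() if p >= threshold]
```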

Designers can choose any one of the four criteria to preset before training and testing. Each has its own tradeoff between flexibility and risk aversion. The choice of criteria depends entirely on the task. In safety-critical situations, for instance, a designer may choose to limit the possibility of failure. But where the consequences of failure are not as severe, designers can give robots greater flexibility to try different approaches.

With the criteria in place, the researchers developed an algorithm to convert the robot’s belief — the probability distribution pointing to the desired formula — into an equivalent reinforcement learning problem. This model will ping the robot with a reward or penalty for an action it takes, based on the specification it’s decided to follow.

In simulations asking the robot to set the table in different configurations, it made only six mistakes out of 20,000 tries. In real-world demonstrations, it showed behavior similar to how a human would perform the task. If the fork wasn’t initially visible, for instance, the robot would finish setting the rest of the table without it. Then, when the fork was revealed, it would set the fork in the proper place. “That’s where flexibility is very important,” Ankit Shah says. “Otherwise it would get stuck when it expects to place a fork and not finish the rest of the table setup.”

Next, the researchers hope to modify the system to help robots change their behavior based on verbal instructions, corrections, or a user’s assessment of the robot’s performance. “Say a person demonstrates to a robot how to set a table at only one spot. The person may say, ‘do the same thing for all other spots,’ or, ‘place the knife before the fork here instead,’” Ankit Shah says. “We want to develop methods for the system to naturally adapt to handle those verbal commands, without needing additional demonstrations.”  
