As 3-D printing has become a mainstream technology, industry and academic researchers have been investigating printable structures that will fold themselves into useful three-dimensional shapes when heated or immersed in water.
In a paper appearing in the American Chemical Society’s journal Applied Materials and Interfaces, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and colleagues report something new: a printable structure that begins to fold itself up as soon as it’s peeled off the printing platform.
One of the big advantages of devices that self-fold without any outside stimulus, the researchers say, is that they can involve a wider range of materials and more delicate structures.
“If you want to add printed electronics, you’re generally going to be using some organic materials, because a majority of printed electronics rely on them,” says Subramanian Sundaram, an MIT graduate student in electrical engineering and computer science and first author on the paper. “These materials are often very, very sensitive to moisture and temperature. So if you have these electronics and parts, and you want to initiate folds in them, you wouldn’t want to dunk them in water or heat them, because then your electronics are going to degrade.”
To illustrate this idea, the researchers built a prototype self-folding printable device that includes electrical leads and a polymer “pixel” that changes from transparent to opaque when a voltage is applied to it. The device, which is a variation on the “printable goldbug” that Sundaram and his colleagues announced earlier this year, starts out looking something like the letter “H.” But each of the legs of the H folds itself in two different directions, producing a tabletop shape.
The researchers also built several different versions of the same basic hinge design, which show that they can control the precise angle at which a joint folds. In tests, they forcibly straightened the hinges by attaching them to a weight, but when the weight was removed, the hinges resumed their original folds.
In the short term, the technique could enable the custom manufacture of sensors, displays, or antennas whose functionality depends on their three-dimensional shape. Longer term, the researchers envision the possibility of printable robots.
Sundaram is joined on the paper by his advisor, Wojciech Matusik, an associate professor of electrical engineering and computer science (EECS) at MIT; Marc Baldo, also an associate professor of EECS, who specializes in organic electronics; David Kim, a technical assistant in Matusik’s Computational Fabrication Group; and Ryan Hayward, a professor of polymer science and engineering at the University of Massachusetts at Amherst.
This clip shows an example of an accelerated fold. (Image: Tom Buehler/CSAIL)
Stress relief
The key to the researchers’ design is a new printer-ink material that expands after it solidifies, which is unusual. Most printer-ink materials contract slightly as they solidify, a technical limitation that designers frequently have to work around.
Printed devices are built up in layers, and in their prototypes the MIT researchers deposit their expanding material at precise locations in either the top or bottom few layers. The bottom layer adheres slightly to the printer platform, and that adhesion is enough to hold the device flat as the layers are built up. But as soon as the finished device is peeled off the platform, the joints made from the new material begin to expand, bending the device in the opposite direction.
Like many technological breakthroughs, the CSAIL researchers’ discovery of the material was an accident. Most of the printer materials used by Matusik’s Computational Fabrication Group are combinations of polymers, long molecules that consist of chainlike repetitions of single molecular components, or monomers. Mixing these components is one method for creating printer inks with specific physical properties.
While trying to develop an ink that yielded more flexible printed components, the CSAIL researchers inadvertently hit upon one that expanded slightly after it hardened. They immediately recognized the potential utility of expanding polymers and began experimenting with modifications of the mixture, until they arrived at a recipe that let them build joints that would expand enough to fold a printed device in half.
Whys and wherefores
Hayward’s contribution to the paper was to help the MIT team explain the material’s expansion. The ink that produces the most forceful expansion includes several long molecular chains and one much shorter chain, made up of the monomer isooctyl acrylate. When a layer of the ink is exposed to ultraviolet light — or “cured,” a process commonly used in 3-D printing to harden materials deposited as liquids — the long chains connect to each other, producing a rigid thicket of tangled molecules.
When another layer of the material is deposited on top of the first, the small chains of isooctyl acrylate in the top, liquid layer sink down into the lower, more rigid layer. There, they interact with the longer chains to exert an expansive force, which the adhesion to the printing platform temporarily resists.
The researchers hope that a better theoretical understanding of the reason for the material’s expansion will enable them to design materials tailored to specific applications — including materials that resist the 1–3 percent contraction typical of many printed polymers after curing.
“This work is exciting because it provides a way to create functional electronics on 3-D objects,” says Michael Dickey, a professor of chemical engineering at North Carolina State University. “Typically, electronic processing is done in a planar, 2-D fashion and thus needs a flat surface. The work here provides a route to create electronics using more conventional planar techniques on a 2-D surface and then transform them into a 3-D shape, while retaining the function of the electronics. The transformation happens by a clever trick to build stress into the materials during printing.”
The NTSB (National Transportation Safety Board) has released a preliminary report on the fatal Tesla crash, with the full report expected later this week. The report is much less favorable to the autopilot than the agency’s earlier evaluation.
(This is a giant news day for robocars. Today NHTSA also released its new draft robocar regulations, which appear to be much simpler than the earlier 116-page document that I was very critical of last year. It’s a busy day, so I will be posting a more detailed evaluation of the new regulations — and the proposed new robocar laws from the House — later in the week.)
The earlier NTSB report indicated that though the autopilot had its flaws, overall the system was working. That is, even though some drivers were misusing the autopilot, the combined population of drivers, those who misused it together with those who did not, was overall safer than drivers with no autopilot at all. The new report makes it clear that this does not excuse the autopilot being so easy to abuse. (By abuse, I mean ignoring the warnings and treating it like a robocar, letting it drive you without actively monitoring the road, ready to take control.)
While the report mostly faults the truck driver for turning at the wrong time, it also blames Tesla for not doing a good enough job of ensuring that the driver is not abusing the autopilot. Tesla makes you touch the wheel every so often, but the NTSB notes that it is possible to touch the wheel without actually looking at the road. The NTSB is also concerned that the autopilot can operate in this fashion even on roads it was not designed for. The board notes that Tesla has improved some of these things since the accident.
This means that “touch the wheel” systems will probably not be considered acceptable in the future, and there will have to be some means of assuring the driver is really paying attention. Some vendors have decided to put in cameras that watch the driver, and in particular the driver’s eyes, to check for attention. After the Tesla accident, I proposed a system that tests driver attention from time to time and penalizes drivers who are not paying attention, which could do the job without adding new hardware.
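To make the idea concrete, here is a minimal sketch of how such a periodic attention test might work. The intervals, response deadline, and penalty below are purely illustrative assumptions, not any vendor’s actual design:

```python
import random

class AttentionMonitor:
    """Illustrative periodic attention test for a driver-assist system.

    At randomized intervals the system issues a prompt (say, a chime and a
    dashboard light). If the driver does not acknowledge it within the
    deadline, the autopilot is locked out for the rest of the trip.
    """

    def __init__(self, min_gap_s=120, max_gap_s=600, response_deadline_s=5):
        self.min_gap_s = min_gap_s
        self.max_gap_s = max_gap_s
        self.response_deadline_s = response_deadline_s
        self.autopilot_enabled = True

    def next_check_delay(self):
        # Randomize the interval so drivers cannot game a fixed schedule.
        return random.uniform(self.min_gap_s, self.max_gap_s)

    def run_check(self, prompt_driver, await_acknowledgement):
        # prompt_driver() shows the alert; await_acknowledgement(deadline)
        # returns True if the driver responded within the deadline.
        prompt_driver()
        if not await_acknowledgement(self.response_deadline_s):
            # The "punishment": disable the autopilot until the trip ends.
            self.autopilot_enabled = False
        return self.autopilot_enabled
```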
It also seems that autopilot cars will need maps of which roads they work on and which they don’t, and will need to limit features based on the type of road you’re on.
Summer is not without its annoyances — mosquitoes, wasps, and ants, to name a few. As the cool breeze of September pushes us back to work, labs across the country are reconvening to tackle nature’s hardest problems. Sometimes forces that seem diametrically opposed come together in beautiful ways, like robotics infused into living organisms.
This past summer, researchers at Harvard and Arizona State University collaborated on turning living E. coli bacteria into a cellular robot, called a “ribocomputer.” Taking archived movie footage, the Harvard scientists were able to store the digital content on the bacterium most famous for making Chipotle customers violently ill. According to Seth Shipman, lead researcher at Harvard, this was the first time anyone had archived data onto a living organism.
Responding to the original article published in July in Nature, Julius Lucks, a bioengineer at Northwestern University, said that Shipman’s discovery will enable wider exploitation of DNA encoding. “What these papers represent is just how good we are getting at harnessing that power,” explained Lucks. The key to the discovery was Shipman’s ability to translate the movie’s pixels into DNA’s four-letter code: “molecules represented by the letters A, T, G and C—and synthesized that DNA. But instead of generating one long strand of code, they arranged it, along with other genetic elements, into short segments that looked like fragments of viral DNA.” Another important factor was E. coli‘s natural ability “to grab errant pieces of viral DNA and store them in its own genome—a way of keeping a chronological record of invaders. So when the researchers introduced the pieces of movie-turned-synthetic DNA—disguised as viral DNA—E. coli’s molecular machinery grabbed them and filed them away.”
Shipman used this methodology to eventually turn the cells into a computer that not only stores data but actually makes logic-based decisions. Partnering with Alexander Green at Arizona State University’s Biodesign Institute, the two institutions collaborated on building their ribocomputer, which programs bacteria with ribonucleic acid, or RNA. According to Green, the “ribocomputer can evaluate up to a dozen inputs, make logic-based decisions using AND, OR, and NOT operations, and give the cell commands.” Green stated that this is the most complex biological computer created in a living cell to date. The discovery by Green and Shipman means that cells could now be programmed to self-destruct if they sense the presence of cancer markers, or even heal the body from within by attacking foreign toxins.
Timothy Lu of MIT called the discovery the beginning of the “golden age of circuit design.” Lu further said, “The way that electrical engineers have gone about establishing design hierarchy or abstraction layers — I think that’s going to be really important for biology.” In a recent IEEE article, Lucks cautioned readers that such manipulation of nature can ultimately lead to a host of ethical considerations: “I don’t think anybody would really argue that it’s unethical to do this in E. coli. But as you go up in the chain [of living organisms], it gets more interesting from an ethical point of view.”
Nature has been the inspiration for numerous discoveries in modern robotics, and has even given rise to its own field, biomimicry. However, manipulating living organisms according to the whims of humans is just beginning to take shape. A couple of years ago, Hong Liang, a researcher at Texas A&M University, outfitted a cockroach with a 3-gram backpack-like device that had a microprocessor, lithium battery, camera sensor, and electrical nerve-control system. Liang then used her makeshift insect robo-suit to remotely drive the waterbug through a maze.
When asked by the Guardian what prompted her to use bugs as robots, Liang explained, “Insects can do things a robot cannot. They can go into small places, sense the environment, and if there’s movement, from a predator say, they can escape much better than a system designed by a human. We wanted to find ways to work with them.”
Liang believes that robo-roaches could be especially useful in disaster-recovery situations that take advantage of the insect’s small size and endurance. Liang says that some cockroaches can carry five times their own body weight, but the heavier the load, the greater the toll it takes on their performance. “We did an endurance test and they do get tired,” Liang explained. “We put them on a treadmill for a minute and then let them rest. If the backpack is lighter, they can go on for longer.” Liang has inspired other labs to work with different species of insects.
Draper, the US defense contractor, is working on its own insect robot by turning live dragonflies into controllable, hard-to-detect drones. The DragonflEye Project is a departure from the technique developed by Liang, as it uses light to steer neurons instead of electrical nerve stimulation. Jesse Wheeler, the project lead at Draper, says this methodology is like “a joystick that tells the system how to coordinate flight activities.” Through this “joystick,” Wheeler can steer the wings in flight and program coordinates into the bug for mission directions via an attached micro backpack that includes a guidance system, solar energy cells, navigation cells, and optical stimulation.
Draper believes that swarms of digitally enhanced insects might hold the key to national defense, as locusts and bees have already been programmed to identify scents, such as chemical explosives. The critters could eventually be programmed to collect and analyze samples for homeland security, in addition to obvious surveillance opportunities. Liang boasts that her cyborg roaches are “more versatile and flexible, and they require less control” than traditional robots. However, Liang also reminds us that “they’re more real”: ultimately, living organisms, even with mechanical backpacks, are not machines.
Author’s note: This topic and more will be discussed at our next RobotLabNYC event in one week on September 19th at 6pm, “Investing In Unmanned Systems,” with experts from NASA, AUVSI, and Genius NY.
This summer, a survey released by the American Automobile Association showed that 78 percent of Americans feared riding in a self-driving car, with just 19 percent trusting the technology. What might it take to alter public opinion on the issue? Iyad Rahwan, the AT&T Career Development Professor in the MIT Media Lab, has studied the issue at length, and, along with Jean-Francois Bonnefon of the Toulouse School of Economics and Azim Shariff of the University of California at Irvine, has authored a new commentary on the subject, titled “Psychological roadblocks to the adoption of self-driving vehicles,” published today in Nature Human Behaviour. Rahwan spoke to MIT News about the hurdles automakers face if they want greater public buy-in for autonomous vehicles.
Q: Your new paper states that when it comes to autonomous vehicles, trust “will determine how widely they are adopted by consumers, and how tolerated they are by everyone else.” Why is this?
A: It’s a new kind of agent in the world. We’ve always built tools and had to trust that technology will function in the way it was intended. We’ve had to trust that the materials are reliable and don’t have health hazards, and that there are consumer protection entities that promote the interests of consumers. But these are passive products that we choose to use. For the first time in history we are building objects that are proactive and have autonomy and are even adaptive. They are learning behaviors that may be different from the ones they were originally programmed for. We don’t really know how to get people to trust such entities, because humans don’t have mental models of what these entities are, what they’re capable of, how they learn.
Before we can trust machines like autonomous vehicles, we have a number of challenges. The first is technical: the challenge of building an AI [artificial intelligence] system that can drive a car. The second is legal and regulatory: Who is liable for different kinds of faults? A third class of challenges is psychological. Unless people are comfortable putting their lives in the hands of AI, then none of this will matter. People won’t buy the product, the economics won’t work, and that’s the end of the story. What we’re trying to highlight in this paper is that these psychological challenges have to be taken seriously, even if [people] are irrational in the way they assess risk, even if the technology is safe and the legal framework is reliable.
Q: What are the specific psychological issues people have with autonomous vehicles?
A: We classify three psychological challenges that we think are fairly big. One of them is dilemmas: A lot of people are concerned about how autonomous vehicles will resolve ethical dilemmas. How will they decide, for example, whether to prioritize safety for the passenger or safety for pedestrians? Should this influence the way in which the car makes a decision about relative risk? And what we’re finding is that people have an idea about how to solve this dilemma: The car should just minimize harm. But the problem is that people are not willing to buy such cars, because they want to buy cars that will always prioritize themselves.
A second one is that people don’t always reason about risk in an unbiased way. People may overplay the risk of dying in a car crash caused by an autonomous vehicle even if autonomous vehicles are, on the average, safer. We’ve seen this kind of overreaction in other fields. Many people are afraid of flying even though you’re incredibly less likely to die from a plane crash than a car crash. So people don’t always reason about risk.
The third class of psychological challenges is this idea that we don’t always have transparency about what the car is thinking and why it’s doing what it’s doing. The carmaker has better knowledge of what the car thinks and how it behaves … which makes it more difficult for people to predict the behavior of autonomous vehicles, which can also diminish trust. One of the preconditions of trust is predictability: If I can trust that you will behave in a particular way, I can behave according to that expectation.
Q: In the paper you state that autonomous vehicles are better depicted “as being perfected, not as perfect.” In essence, is that your advice to the auto industry?
A: Yes, I think setting up very high expectations can be a recipe for disaster, because if you overpromise and underdeliver, you get in trouble. That is not to say that we should underpromise. We should just be a bit realistic about what we promise. If the promise is an improvement on the current status quo, that is, a reduction in risk to everyone, both pedestrians as well as passengers in cars, that’s an admirable goal. Even if we achieve it in a small way, that’s already progress that we should take seriously. I think being transparent about that, and being transparent about the progress being made toward that goal, is crucial.
In episode eight of season three we return to the epic (or maybe not so epic) clash between frequentists and Bayesians, take a listener question about the ethical questions that creators of machine learning should be asking of themselves (not just of their tools), and hear a conversation with Ernest Mwebaze of Makerere University.
IBM and MIT today announced that IBM plans to make a 10-year, $240 million investment to create the MIT–IBM Watson AI Lab in partnership with MIT. The lab will carry out fundamental artificial intelligence (AI) research and seek to propel scientific breakthroughs that unlock the potential of AI. The collaboration aims to advance AI hardware, software, and algorithms related to deep learning and other areas; increase AI’s impact on industries, such as health care and cybersecurity; and explore the economic and ethical implications of AI on society. IBM’s $240 million investment in the lab will support research by IBM and MIT scientists.
The new lab will be one of the largest long-term university-industry AI collaborations to date, mobilizing the talent of more than 100 AI scientists, professors, and students to pursue joint research at IBM’s Research Lab in Cambridge, Massachusetts — co-located with the IBM Watson Health and IBM Security headquarters in Kendall Square — and on the neighboring MIT campus.
The lab will be co-chaired by Dario Gil, IBM Research VP of AI and IBM Q, and Anantha P. Chandrakasan, dean of MIT’s School of Engineering. (Read a related Q&A with Chandrakasan.) IBM and MIT plan to issue a call for proposals to MIT researchers and IBM scientists to submit their ideas for joint research to push the boundaries in AI science and technology in several areas, including:
AI algorithms: Developing advanced algorithms to expand capabilities in machine learning and reasoning. Researchers will create AI systems that move beyond specialized tasks to tackle more complex problems and benefit from robust, continuous learning. Researchers will invent new algorithms that can not only leverage big data when available, but also learn from limited data to augment human intelligence.
Physics of AI: Investigating new AI hardware materials, devices, and architectures that will support future analog computational approaches to AI model training and deployment, as well as the intersection of quantum computing and machine learning. The latter involves using AI to help characterize and improve quantum devices, and researching the use of quantum computing to optimize and speed up machine-learning algorithms and other AI applications.
Application of AI to industries: Given its location in IBM Watson Health and IBM Security headquarters in Kendall Square, a global hub of biomedical innovation, the lab will develop new applications of AI for professional use, including fields such as health care and cybersecurity. The collaboration will explore the use of AI in areas such as the security and privacy of medical data, personalization of health care, image analysis, and the optimum treatment paths for specific patients.
Advancing shared prosperity through AI: The MIT–IBM Watson AI Lab will explore how AI can deliver economic and societal benefits to a broader range of people, nations, and enterprises. The lab will study the economic implications of AI and investigate how AI can improve prosperity and help individuals achieve more in their lives.
In addition to IBM’s plan to produce innovations that advance the frontiers of AI, a distinct objective of the new lab is to encourage MIT faculty and students to launch companies that will focus on commercializing AI inventions and technologies that are developed at the lab. The lab’s scientists also will publish their work, contribute to the release of open source material, and foster an adherence to the ethical application of AI.
“The field of artificial intelligence has experienced incredible growth and progress over the past decade. Yet today’s AI systems, as remarkable as they are, will require new innovations to tackle increasingly difficult real-world problems to improve our work and lives,” says John Kelly III, IBM senior vice president, Cognitive Solutions and Research. “The extremely broad and deep technical capabilities and talent at MIT and IBM are unmatched, and will lead the field of AI for at least the next decade.”
“I am delighted by this new collaboration,” MIT President L. Rafael Reif says. “True breakthroughs are often the result of fresh thinking inspired by new kinds of research teams. The combined MIT and IBM talent dedicated to this new effort will bring formidable power to a field with staggering potential to advance knowledge and help solve important challenges.”
Both MIT and IBM have been pioneers in artificial intelligence research, and the new AI lab builds on a decades-long research relationship between the two. In 2016, IBM Research announced a multiyear collaboration with MIT’s Department of Brain and Cognitive Sciences to advance the scientific field of machine vision, a core aspect of artificial intelligence. The collaboration has brought together leading brain, cognitive, and computer scientists to conduct research in the field of unsupervised machine understanding of audio-visual streams of data, using insights from next-generation models of the brain to inform advances in machine vision. In addition, IBM and the Broad Institute of MIT and Harvard have established a five-year, $50 million research collaboration on AI and genomics.
MIT researchers were among those who helped coin and popularize the very phrase “artificial intelligence” in the 1950s. MIT pushed several major advances in the subsequent decades, from neural networks to data encryption to quantum computing to crowdsourcing. Marvin Minsky, a founder of the discipline, collaborated on building the first artificial neural network and he, along with Seymour Papert, advanced learning algorithms. Currently, the Computer Science and Artificial Intelligence Laboratory, the Media Lab, the Department of Brain and Cognitive Sciences, and the MIT Institute for Data, Systems, and Society serve as connected hubs for AI and related research at MIT.
For more than 20 years, IBM has explored the application of AI across many areas and industries. IBM researchers invented and built Watson, which is a cloud-based AI platform being used by businesses, developers, and universities to fight cancer, improve classroom learning, minimize pollution, enhance agriculture and oil and gas exploration, better manage financial investments, and much more. Today, IBM scientists across the globe are working on fundamental advances in AI algorithms, science and technology that will pave the way for the next generation of artificially intelligent systems.
For information about employment opportunities with IBM at the new AI Lab, please visit MITIBMWatsonAILab.mit.edu.
Aerospace conglomerate United Technologies is paying $30 billion to acquire Rockwell Collins in a deal that creates one of the world’s largest makers of civilian and defense aircraft components. Rockwell Collins and United’s Aerospace Systems segment will combine to create a new business unit named Collins Aerospace Systems.
United Technologies will pay $140 per share for Rockwell Collins shares: $93.33 in cash and $46.67 in stock. The $140 price represents a 17.6% premium for Rockwell shareholders.
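As a quick sanity check on the stated terms (this is just arithmetic on the figures above, not additional deal information), the cash and stock components sum to the $140 offer, and the quoted 17.6% premium implies a pre-announcement Rockwell share price of roughly $119:

```python
# Arithmetic check of the stated deal terms (illustrative only).
cash_component = 93.33
stock_component = 46.67
offer_per_share = cash_component + stock_component   # 140.00
premium = 0.176                                       # 17.6%

# Price implied as the pre-announcement reference for Rockwell Collins shares.
implied_reference_price = offer_per_share / (1 + premium)
print(f"Offer per share: ${offer_per_share:.2f}")
print(f"Implied pre-deal price: ${implied_reference_price:.2f}")  # about $119.05
```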
“This acquisition adds tremendous capabilities to our aerospace businesses and strengthens our complementary offerings of technologically advanced aerospace systems,” said UTC’s chairman and CEO, Greg Hayes.
Both companies have subsidiaries involved in robotics, drones and marine systems but both derive most of their revenue from civilian and defense aerospace.
United Technologies includes Otis elevators, escalators and moving walkways; Pratt & Whitney military and commercial engines, power units and turbojet products; Carrier heating, air-conditioning and refrigeration products; Chubb security and fire-safety solutions; Kidde smoke alarms and fire-safety technology; and UTC Aerospace Systems, which provides aircraft interiors, space and ISR systems, landing gear, and sensors and sensor-based systems for everything from ice detection to guidance and navigation. Their Aerospace Systems unit has a wide range of products for multiple unmanned platforms, including unmanned underwater vehicles (UUVs).
Rockwell Collins (not to be confused with Rockwell Automation*, which is heavily involved in robotics but is not part of this acquisition) designs and produces electronic communications, avionics and in-flight entertainment systems for commercial, military and government customers, including navigation and display systems for unmanned commercial and military vehicles. Its electronics are installed in nearly every airline cockpit in the world. Its helmet-mounted display systems and in-car head-up displays are also big revenue producers.
According to Reuters, “The deal also follows a wave of consolidation among smaller aerospace manufacturers in recent years that was caused in part by the need to invest in new technologies such as metal 3-D printing and connected factories to stay competitive. A combined United Technologies and Rockwell Collins could similarly invest, and their broad portfolios have little overlap.”
________________
Rockwell Automation is one of 83 members of the ROBO-Global Robotics & Automation Index, of which I am a co-founder. The index is designed to leverage advances in technology and macroeconomic drivers to capture the growth opportunity from robotics and automation.
How will robocars fare in a disaster, like Harvey in Houston, Irma, or the tsunamis in Japan or Indonesia, or a big earthquake, or a fire, or 9/11, or a war?
These are very complex questions, and certainly most teams developing cars have not spent a lot of time on solutions to them at present. Indeed, I expect that these will not be solved issues until after the first significant pilot projects are deployed, because as long as robocars are a small fraction of the car population, they will not have that much effect on how things go. Some people who have given up car ownership for robocars — not that many in the early days — will possibly find themselves hunting for transportation the way other people who don’t own cars do today.
It’s a different story when, perhaps a decade from now, we get significant numbers of people who don’t own cars and rely on robocar transportation. That means people who don’t have any cars, not the larger number of people who have dropped from 2 cars to 1 thanks to robocar services.
I addressed a few of these questions before regarding tsunamis and earthquakes.
A few key questions should be addressed:
How will the car fleets deal with massively increased demand during evacuations and flight during an emergency?
How will the cars deal with shutdown and overload of the mobile data networks, if it happens?
How will cars deal with things like floods, storms, earthquakes and more which block roads or make travel unsafe on certain roads?
Most of these issues revolve around fleets. Privately owned robocars will tend to have steering wheels and be usable as regular cars, and so only improve the situation. If they encounter unsafe roads, they will ask their passengers for guidance, or full driving. (However, in a few decades, their passengers may no longer be very capable of driving, but the car will handle the hard parts and leave them just to provide video-game-style directions.)
Increased demand
An immediately positive thing is the potential ability for private robocars to, once they have taken their owners to safety, drive back into the evacuation zone as temporary fleet cars, and fetch other people, starting with those selected by the car’s owner, but also members of the public needing assistance. This should dramatically increase the ability of the car fleet to get people moved.
Nonetheless, it is often noted that in a robocar taxi world, there don’t need to be nearly as many cars in a city as we have today. With ideal efficiency, there would be exactly enough seats to handle the annual peak, but not many more. We might drop to just 1/4 of the cars, and many of them might be only one- or two-seater cars. There will be far fewer SUVs, pickup trucks, minivans and other large vehicles, because we don’t really need nearly as many as we have today.
To counter this, carpooling may become mandatory. This will be fought, because it means you don’t get to take all the physical stuff you want to bring in the event of something like flooding. Worse, we might see conflict over people wanting to bring pets (in carriers), which could take seats that might otherwise be used by people. In a very urgent situation, we could see an order coming down requiring pets to be left behind lest people be left behind. Once it’s clear all the people will make it out, people or rescue workers could go back for the pets, but that’s not very tenable.
One solution, if there is enough warning of the disaster (as there is for storms but not for other disasters) is for cars from outside the region to be pressed into service. There may be millions of cars within 1-2 hours drive of a disaster zone, allowing a major increase in capacity within a short time. This would include private cars as well as taxi fleets from other cities. Those cities would face a reduction in service and more carpooling. As I write this, Irma is heading to Florida but it is uncertain where. Fleets of cars could, in such a situation, be on their way from other states, distributing themselves along the probable path, and improving the position as the path is better known. They could be ready to do a very quick mass evacuation once the forecast is certain, and return home if nothing happens. For efficiency they could also drive themselves to be placed on car carriers and trains.
Disaster management agencies will need to build tools that calculate how many people need to be moved and what capacity exists to move them. This will let them calculate how much excess capacity there is to move pets and household possessions.
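As a rough illustration of the kind of tool described above, the core calculation is simply seats available versus people to move, with whatever is left over available for pets and possessions. The function and numbers below are hypothetical placeholders:

```python
def evacuation_capacity(people_to_move, fleet_vehicles, avg_seats_per_vehicle,
                        trips_per_vehicle):
    """Rough evacuation-capacity estimate (illustrative only).

    people_to_move: residents in the evacuation zone
    fleet_vehicles: usable robocars (local fleet + private cars + out-of-region fleets)
    avg_seats_per_vehicle: mean passenger seats per vehicle
    trips_per_vehicle: round trips each vehicle can complete before the deadline
    """
    seats_available = fleet_vehicles * avg_seats_per_vehicle * trips_per_vehicle
    surplus = seats_available - people_to_move
    return {
        "seats_available": seats_available,
        "shortfall": max(0, -surplus),        # people who cannot be moved in time
        "excess_capacity": max(0, surplus),   # seats left over for pets and possessions
    }

# Hypothetical example: 200,000 people, 40,000 vehicles, 3 seats each, 2 trips each.
print(evacuation_capacity(200_000, 40_000, 3, 2))
```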
If there are robotic transit vehicles they could help a lot. Cars might ferry people from homes to stations where robotic buses (including those from other cities, and human driven buses) could carry lots of people. Then they could return, with no risk to a driver.
The last hopeful item is the ability to do better traffic management. That’s an issue in any disaster, as people will resist following rules. If people get used to the idea of lane direction reassignment, it can do a lot here. Major routes could be changed to have only one lane going towards the disaster area. That lane would be robocar or emergency vehicle only. The next lane would be going out of the disaster area, and it would be strictly robocar only. The robocars could safely drive in opposite directions at high speed without a barrier, and they could provide a buffer for the human-driven cars in all the other lanes. One simple solution might be to convert the inbound lanes to robocar only, with allocation of lanes based on traffic demand. The outbound lanes would remain outbound and have a mix of cars. The buses would use the robocar lanes for a congestion-free quick trip out and in.
Saving cars
It is estimated that as many as a million cars were destroyed by flooding in Harvey. Fortunately, robocars could be told to wake up and leave, even if not pressed into service for evacuation. With good knowledge of the topography of the land, they can also calculate the depth of water by looking at the shape of a flood, and never drive where they could get stuck. Few cars would be destroyed by the flood.
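A minimal sketch of that depth calculation, under the assumption that the car has a terrain model and an estimate of the flood’s surface elevation from mapping data: the standing-water depth at a point is just the flood surface minus the ground elevation, and the car refuses any route segment where that depth exceeds its safe wading limit. All names and thresholds here are illustrative:

```python
def water_depth(flood_surface_elevation_m, ground_elevation_m):
    """Estimated standing-water depth at a point, in metres."""
    return max(0.0, flood_surface_elevation_m - ground_elevation_m)

def route_is_passable(route_points, flood_surface_elevation_m, terrain_model,
                      max_safe_depth_m=0.2):
    """True if no point on the route exceeds the vehicle's safe wading depth.

    terrain_model(point) returns the ground elevation in metres at that point;
    both the model and the flood surface estimate come from mapping data.
    """
    return all(
        water_depth(flood_surface_elevation_m, terrain_model(p)) <= max_safe_depth_m
        for p in route_points
    )
```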
Loss of data
We’ve seen data networks fail. Cars need the data networks to get orders to travel to destinations to pick people up. They also want updates on road conditions, closures, problems and reallocations.
This is one of the few places where DSRC could help, as it would not depend on the mobile data networks. But such a rare use is not enough to justify it, and mesh networking is not currently in its design. It is probably more effective to build tools to keep the mobile data networks up, such as a network of emergency cell towers mounted in robotic trucks (and boats and planes?) that could travel quickly to provide limited service, for use by vehicles and for emergency communications only. Keep people to text messages and the networks have plenty of capacity.
Existing cell towers could also be hardened, to have at least enough backup power for an evacuation, if not for a long disaster.
Roads changed by disasters
You can probably imagine 1,000 strange things that could happen during a disaster. Flooded streets. Trees and powerlines down. Buildings collapsed. Cracks in the road. Washed out bridges. Approaching tsunamis. High winds. Crashed cars.
The good thing is, if you can imagine it, so can the teams building test systems for robocars. They are building simulators to play out every strange situation they can think of, or that they’ve ever encountered in many human lifetimes of real driving on the road. Every car then gets tested to see what it will do in those strange situations, and 1,000 variations of each situation.
Cars could know the lay of the land, so that they could predict how deep water is or where flooding will go. They could know where the high ground is and how to get there without going over the low ground. If the data networks are up, they could get information in real time on road problems and disaster situations. One car might run into trouble, but after that, no other car will go that way if they shouldn’t. This even applies to traffic, something we already do with tools like Waze.
War
War is one of the most difficult challenges. Roads will be blocked or changed. Certain places will be extremely dangerous and must not be visited. Checkpoints will be created that you must stop for. Communications networks will be compromised. Parties may be attempting to break into your car to take it over and turn it into a weapon against you or others. Insurgents may be modifying robocars and even ordinary drive-by-wire cars to turn them into bomb delivery systems. Cars or streets may come under active attack from snipers, artillery, grenade throwers and more. In the most extreme case, a nuclear weapon or chemical weapon might be used.
The military wants autonomous vehicles. It wants them to move cargo in dangerous but not active war zones, and it wants them for battle. It will be dealing with all these problems, but there is no clear path from its plans to civilian vehicles. Most civilian developers will not consider war situations very heavily until they start wanting to sell cars for use in conflict zones. At first, the primary solution will be to have a steering wheel to allow manual control. The second approach will be what I call “video game mode,” where you can drive the car with a video game controller. It will take charge of not hitting things; you will just tell it where to go — what turns to make, what side of an obstacle to drive on, and, scariest of all, when to override its own sensors when they believe it can’t go forward because of an obstacle.
In a conflict zone, communications will become very suspect and unreliable. No operations can depend on communications, and all communications should be seen as possible vectors for attack. At the same time you need data about places to avoid — and I mean really avoid. This problem needs lots more thought, and for now, I don’t know of people thinking about robotaxi service in conflict zones.
Race Avoidance in the Development of Artificial General Intelligence
Olga Afanasjeva, Jan Feyereisl, Marek Havrda, Martin Holec, Seán Ó hÉigeartaigh, Martin Poliak
SUMMARY
◦ Promising strides are being made in research towards artificial general intelligence systems. This progress might lead to an apparent winner-takes-all race for AGI.
◦ Concerns have been raised that such a race could create incentives to skimp on safety and to defy established agreements between key players.
◦ The AI Roadmap Institute held a workshop to begin interdisciplinary discussion on how to avoid scenarios where such a dangerous race could occur.
◦ The focus was on scoping the problem, defining relevant actors, and visualizing possible scenarios of the AI race through example roadmaps.
◦ The workshop was the first step in preparation for the AI Race Avoidance round of the General AI Challenge that aims to tackle this difficult problem via citizen science and promote AI safety research beyond the boundaries of the small AI safety community.
Scoping the problem
With the advent of artificial intelligence (AI) in most areas of our lives, the stakes are getting higher at every level. Investments in companies developing machine intelligence applications are reaching astronomical amounts. Despite the rather narrow focus of most existing AI technologies, the extreme competition is real, and it directly impacts the distribution of researchers between research institutions and private enterprises.
With the goal of artificial general intelligence (AGI) in sight, the competition on many fronts will become acute with potentially severe consequences regarding the safety of AGI.
The first general AI system will be disruptive and transformative. First-mover advantage will be decisive in determining the winner of the race, due to the expected exponential growth in the system’s capabilities and the difficulty other parties will have catching up. There is a chance that lengthy and tedious AI safety work ceases to be a priority when the race is on. The risk of an AI-related disaster increases when developers do not devote the necessary attention and resources to the safety of such a powerful system [1].
Once this Pandora’s box is opened, it will be hard to close. We have to act before this happens and hence the question we would like to address is:
How can we avoid general AI research becoming a race between researchers, developers and companies, where AI safety gets neglected in favor of faster deployment of powerful, but unsafe general AI?
Motivation for this post
As a community of AI developers, we should strive to avoid the AI race. Some work has been done on this topic in the past [1,2,3,4,5], but the problem is largely unsolved. We need to focus the efforts of the community to tackle this issue and avoid a potentially catastrophic scenario in which developers race towards the first general AI system while sacrificing safety of humankind and their own.
This post marks “step 0” that we have taken to tackle the issue. It summarizes the outcomes of a workshop held by the AI Roadmap Institute on 29th May 2017, at GoodAI head office in Prague, with the participation of Seán Ó hÉigeartaigh (CSER), Marek Havrda, Olga Afanasjeva, Martin Poliak (GoodAI), Martin Holec (KISK MUNI) and Jan Feyereisl (GoodAI & AI Roadmap Institute). We focused on scoping the problem, defining relevant actors, and visualizing possible scenarios of the AI race.
This workshop is the first in a series held by the AI Roadmap Institute in preparation for the AI Race Avoidance round of the General AI Challenge (described at the bottom of this page and planned to launch in late 2017). Posing the AI race avoidance problem as a worldwide challenge is a way to encourage the community to focus on solving this problem, explore this issue further and ignite interest in AI safety research.
By publishing the outcomes of this and the future workshops, and launching the challenge focused on AI race avoidance, we would like to promote AI safety research beyond the boundaries of the small AI safety community.
The issue should be subject to wider public discourse, and should benefit from the cross-disciplinary work of behavioral economists, psychologists, sociologists, policy makers, game theorists, security experts, and many more. We believe that transparency is an essential part of solving many of the world’s direst problems, and the AI race is no exception. This in turn may reduce regulatory overshooting and unreasonable political control that could hinder AI research.
Proposed line of thinking about the AI race: Example Roadmaps
One approach for starting to tackle the issue of AI race avoidance, and laying down the foundations for thorough discussion, is the creation of concrete roadmaps that outline possible scenarios of the future. Scenarios can be then compared, and mitigation strategies for negative futures can be suggested.
We used two simple methodologies for creating example roadmaps:
Methodology 1: a simple linear development of affairs is depicted by various shapes and colors representing the following notions: state of affairs, key actor, action, risk factor. The notions are grouped around each state of affairs in order to illustrate principal relevant actors, actions and risk factors.
Methodology 2: each node in a roadmap represents a state, and each link, or transition, represents a decision-driven action by one of the main actors (such as a company/AI developer, government, rogue actor, etc.)
During the workshop, a number of important issues were raised. For example, the need to distinguish different time-scales for which roadmaps can be created, and different viewpoints (good/bad scenario, different actor viewpoints, etc.)
Timescale issue
Roadmapping is frequently a subjective endeavor, and hence multiple approaches to building roadmaps exist. One of the first issues encountered during the workshop concerned time variance. A roadmap created with near-term milestones in mind will be significantly different from a long-term roadmap; nevertheless, both timelines are interdependent. Rather than taking an explicit view on short- versus long-term roadmaps, it might be beneficial to consider these probabilistically. For example, what roadmap can be built if there is a 25% chance of general AI being developed within the next 15 years and a 75% chance of achieving this goal in 15–400 years?
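One simple way to make this probabilistic framing concrete is to weight each candidate roadmap priority by the probability of the timeline in which it matters most. The sketch below uses only the probabilities quoted above; the interventions and their scores are hypothetical placeholders:

```python
# Weighting roadmap priorities by AGI-arrival timelines (illustrative only).
timelines = {
    "short_term (<15 yrs)": 0.25,    # 25% chance of general AI within 15 years
    "long_term (15-400 yrs)": 0.75,  # 75% chance it arrives in 15-400 years
}

# Hypothetical scores: how much each line of work matters under each timeline.
interventions = {
    "immediate safety engineering":   {"short_term (<15 yrs)": 0.9, "long_term (15-400 yrs)": 0.4},
    "institution and trust building": {"short_term (<15 yrs)": 0.5, "long_term (15-400 yrs)": 0.9},
}

for name, scores in interventions.items():
    expected_priority = sum(p * scores[t] for t, p in timelines.items())
    print(f"{name}: expected priority {expected_priority:.2f}")
```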
Considering the AI race at different temporal scales is likely to bring out different aspects that should be focused on. For instance, each actor might anticipate a different speed of reaching the first general AI system. This can have a significant impact on the creation of a roadmap and needs to be incorporated in a meaningful and robust way. For example, a Boy Who Cried Wolf situation can erode the established trust between actors and weaken ties between developers, safety researchers, and investors. This in turn could weaken belief that the first general AI system will be developed at the anticipated time. For example, a low belief in fast AGI arrival could result in miscalculating the risks of a rogue actor deploying unsafe AGI.
Furthermore, two apparent time “chunks” have been identified that also result in significantly different problems to be solved: the pre-AGI and post-AGI eras, i.e., before the first general AI is developed, compared to the scenario after someone is in possession of such a technology.
In the workshop, the discussion focused primarily on the pre-AGI era as the AI race avoidance should be a preventative, rather than a curative effort. The first example roadmap (figure 1) presented here covers the pre-AGI era, while the second roadmap (figure 2), created by GoodAI prior to the workshop, focuses on the time around AGI creation.
Viewpoint issue
We have identified an extensive (but not exhaustive) list of actors that might participate in the AI race, actions taken by them and by others, as well as the environment in which the race takes place, and states in between which the entire process transitions. Table 1 outlines the identified constituents. Roadmapping the same problem from various viewpoints can help reveal new scenarios and risks.
Modelling and investigating the decision dilemmas of various actors frequently led to the conclusion that cooperation could proliferate the application of AI safety measures and lessen the severity of race dynamics.
Cooperation issue
Cooperation among the many actors, and a spirit of trust and cooperation in general, is likely to reduce the race dynamics in the overall system. Starting with low-stakes cooperation among different actors, such as talent co-development or collaboration between safety researchers and industry, should allow for incremental trust building and a better understanding of the issues faced.
Active cooperation between safety experts and AI industry leaders, including cooperation between different AI-developing companies on questions of AI safety, is likely to result in closer ties and in positive information propagating up the chain, all the way to regulatory levels. A hands-on approach to safety research with working prototypes is likely to yield better results than purely theoretical argumentation.
One area that needs further investigation in this regard is forms of cooperation that might seem intuitive but might actually reduce the safety of AI development [1].
Finding incentives to avoid the AI race
It is natural that any sensible developer would want to prevent their AI system from causing harm to its creator and humanity, whether it is a narrow AI or a general AI system. In case of a malignant actor, there is presumably a motivation at least not to harm themselves.
When considering various incentives for safety-focused development, we need to find a robust incentive (or a combination of such) that would push even unknown actors towards beneficial A(G)I, or at least an A(G)I that can be controlled [6].
Tying timescale and cooperation issues together
In order to prevent a negative scenario from happening, it should be beneficial to tie the different time-horizons (anticipated speed of AGI’s arrival) and cooperation together. Concrete problems in AI safety (interpretability, bias-avoidance, etc.) [7] are examples of practically relevant issues that need to be dealt with immediately and collectively. At the same time, the very same issues are related to the presumably longer horizon of AGI development. Pointing out such concerns can promote AI safety cooperation between various developers irrespective of their predicted horizon of AGI creation.
Forms of cooperation that maximize AI safety practice
Encouraging the AI community to discuss and attempt to solve issues such as the AI race is necessary; however, it might not be sufficient. We need to find better and stronger incentives to involve actors from a wider spectrum, beyond those traditionally associated with developing AI systems. Cooperation can be fostered through many scenarios, such as:
◦ AI safety research is done openly and transparently,
◦ Access to safety research is free and anonymous: anyone can be assisted and can draw upon the knowledge base without the need to disclose themselves or what they are working on, and without fear of losing a competitive edge (a kind of “AI safety helpline”),
◦ Alliances are inclusive towards new members,
◦ New members are allowed and encouraged to enter global cooperation programs and alliances gradually, which should foster robust trust building and minimize the burden on all parties involved. An example of gradual inclusion in an alliance or a cooperation program is to start cooperating on issues that are low-stakes from an economic competition point of view, as noted above.
Closing remarks — continuing the work on AI race avoidance
In this post we have outlined our first steps in tackling the AI race. We welcome you to join the discussion and help us gradually come up with ways to minimize the danger of converging to a state in which such a race becomes an issue.
The AI Roadmap Institute will continue to work on AI race roadmapping, identifying further actors, recognizing yet unseen perspectives, time scales and horizons, and searching for risk mitigation scenarios. We will continue to organize workshops to discuss these ideas and publish roadmaps that we create. Eventually we will help build and launch the AI Race Avoidance round of the General AI Challenge. Our aim is to engage the wider research community and to provide it with a sound background to maximize the possibility of solving this difficult problem.
Stay tuned, or even better, join in now.
About the General AI Challenge and its AI Race Avoidance round
The General AI Challenge (Challenge for short) is a citizen science project organized by general artificial intelligence R&D company GoodAI. GoodAI provided a $5 million fund to be given out in prizes throughout the various rounds of the multi-year Challenge. The goal of the Challenge is to incentivize talent to tackle crucial research problems in human-level AI development and to speed up the search for safe and beneficial general artificial intelligence.
The independent AI Roadmap Institute, founded by GoodAI, collaborates with a number of other organizations and researchers on various A(G)I related issues including AI safety. The Institute’s mission is to accelerate the search for safe human-level artificial intelligence by encouraging, studying, mapping and comparing roadmaps towards this goal. The AI Roadmap Institute is currently helping to define the second round of the Challenge, AI Race Avoidance, dealing with the question of AI race avoidance (set to launch in late 2017).
Participants of the second round of the Challenge will deliver analyses and/or solutions to the problem of AI race avoidance. Their submissions will be evaluated in a two-phase evaluation process: through a) expert acceptance and b) business acceptance. The winning submissions will receive monetary prizes, provided by GoodAI.
Expert acceptance
The expert jury prize will be awarded for an idea, concept, feasibility study, or preferably an actionable strategy that shows the most promise for ensuring safe development and avoiding rushed deployment of potentially unsafe A(G)I as a result of market and competition pressure.
Business acceptance
Industry leaders will be invited to evaluate the top 10 submissions from the expert jury prize and possibly a few more submissions of their choice (these may include proposals with potential for a significant breakthrough that fall short on feasibility criteria).
The business acceptance prize is a way to contribute to establishing a balance between the research and the business communities.
The proposals will be treated under an open licence and will be made public together with the names of their authors. Even in the absence of a “perfect” solution, the goal of this round of the General AI Challenge should be fulfilled by advancing the work on this topic and promoting interest in AI safety across disciplines.
While working for the global management consulting company Accenture, Gregory Falco discovered just how vulnerable the technologies underlying smart cities and the “internet of things” — everyday devices that are connected to the internet or a network — are to cyberterrorism attacks.
“What happened was, I was telling sheiks and government officials all around the world about how amazing the internet of things is and how it’s going to solve all their problems and solve sustainability issues and social problems,” Falco says. “And then they asked me, ‘Is it secure?’ I looked at the security guys and they said, ‘There’s no problem.’ And then I looked under the hood myself, and there was nothing going on there.”
Falco is currently transitioning into the third and final year of his PhD within the Department of Urban Studies and Planning (DUSP). He is carrying out his research at the Computer Science and Artificial Intelligence Laboratory (CSAIL). His focus is on cybersecurity for urban critical infrastructure, and the internet of things, or IoT, is at the center of his work. A washing machine, for example, that is connected to an app on its owner’s smartphone is considered part of the IoT. There are billions of IoT devices that don’t have traditional security software because they’re built with small amounts of memory and low-power processors. This makes these devices susceptible to cyberattacks and may provide a gateway for hackers to breach other devices on the same network.
Falco’s concentration is on industrial controls and embedded systems such as automatic switches found in subway systems.
“If someone decides to figure out how to access a switch by hacking another access point that is communicating with that switch, then that subway is not going to stop, and people are going to die,” Falco says. “We rely on these systems for our life functions — critical infrastructure like electric grids, water grids, or transportation systems, but also our health care systems. Insulin pumps, for example, are now connected to your smartphone.”
Citing real-world examples, Falco notes that Russian hackers were able to take down the Ukrainian capital city’s electric grid, and that Iranian hackers interfered with the computer-guided controls of a small dam in Rye Brook, New York.
Falco aims to help combat potential cyberattacks through his research. One arm of his dissertation, which he is working on with renowned negotiation professor Lawrence Susskind, is aimed at conflict negotiation, and looks at how best to negotiate with cyberterrorists. Also, with CSAIL Principal Research Scientist Howard Shrobe, Falco seeks to determine the possibility of predicting which control-systems vulnerabilities could be exploited in critical urban infrastructure. The final branch of Falco’s dissertation is in collaboration with NASA’s Jet Propulsion Laboratory. He has secured a contract to develop an artificial intelligence-powered automated attack generator that can identify all the possible ways someone could hack and destroy NASA’s systems.
“What I really intend to do for my PhD is something that is actionable to the communities I’m working with,” Falco says. “I don’t want to publish something in a book that will sit on a shelf where nobody would read it.”
“Not science fiction anymore”
Falco’s battle against cyberterrorism has also led him to co-found NeuroMesh, a startup dedicated to protecting IoT devices by using the same techniques hackers use.
“The concept of my startup is, ‘Let’s use hacker tools to defeat hackers,’” Falco says. “If you don’t know how to break it, you don’t know how to fix it.”
One tool hackers use is called a botnet. Once a botnet gets onto a device, it often kills off any other malware so that it can claim all of the device’s processing power for itself. Botnets also play “king of the hill,” refusing to let other botnets latch onto the device.
NeuroMesh turns a botnet’s features against itself to create a good botnet. By re-engineering a botnet, programmers can use it to defeat any kind of malware that arrives on a device.
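NeuroMesh has not published how its agent works, but the “king of the hill” idea can be illustrated with a minimal sketch. The Python below is purely illustrative: it assumes a Linux-based IoT device with a /proc filesystem, and the allowlist and the policy of terminating unknown processes are stand-ins for what a real defensive agent would do.

```python
import os
import signal

# Hypothetical allowlist of process names this device is expected to run.
ALLOWED = {"init", "busybox", "watchdogd", "sensor_daemon"}

def running_processes():
    """Yield (pid, name) for every process visible in /proc (Linux only)."""
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            with open(f"/proc/{entry}/comm") as f:
                yield int(entry), f.read().strip()
        except OSError:
            continue  # process exited while we were scanning

def defend():
    """Terminate any process not on the allowlist ('king of the hill')."""
    for pid, name in running_processes():
        if name not in ALLOWED and pid != os.getpid():
            try:
                os.kill(pid, signal.SIGTERM)
            except OSError:
                pass  # no permission or already gone

if __name__ == "__main__":
    defend()
```

A real agent would of course run continuously, verify binaries rather than names, and receive its allowlist through an authenticated update channel.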
“The benefit is also that when you look at securing IoT devices with low memory and low processing power, it’s impossible to put any security on them, but these botnets have no problem getting on there because they are so small,” Falco says.
Much like a vaccine protects against diseases, NeuroMesh applies a cyber vaccine to protect industrial devices from cyberattacks. And, by leveraging the bitcoin blockchain to update devices, NeuroMesh further fortifies the security system to block other malware from attacking vital IoT devices.
Recently, Falco and his team pitched their botnet vaccine at MIT’s $100K Accelerate competition and placed second. Falco’s infant son was in the audience as Falco presented how NeuroMesh’s technology could keep a baby monitor, for example, from being hacked. The startup advanced to MIT’s prestigious $100K Launch startup competition, where it finished among the top eight competitors. NeuroMesh is now further developing its technology with the help of a grant from the Department of Energy, working with Stuart Madnick, the John Norris Maguire Professor at MIT, and Michael Siegel, a principal research scientist at MIT’s Sloan School of Management.
“Enemies are here. They are on our turf and in our wires. It’s not science fiction anymore,” Falco says. “We’re protecting against this. That’s what NeuroMesh is meant to do.”
The human tornado
Falco’s abundant energy has led his family to call him “the tornado.”
“One-fourth of my mind is on my startup, one-fourth on finishing my dissertation, and the other half is on my 11-month-old, because he comes with me when my wife works,” Falco says. “He comes to all our venture capital meetings and my presentations. He’s always around and he’s generally very good.”
As a high school student, Falco’s energy and excitement for engineering drove him to discover a new physics wave theory. Applying this to the tennis racket, he invented a new, control-enhanced method of stringing, with which he won various science competitions (and tennis matches). He used this knowledge to start a small business for stringing rackets. The thrill of business took him on a path to Cornell University’s School of Hotel Administration. After graduating early, Falco transitioned into the field of sustainability technology and energy systems, and returned to his engineering roots by earning his LEED AP (Leadership in Energy and Environmental Design) accreditation and a master’s degree in sustainability management from Columbia University.
His excitement followed him to Accenture, where he founded the smart cities division and eventually learned about the vulnerability of IoT devices. For the past three years, Falco has also been sharing his newfound knowledge about sustainability and computer science as an adjunct professor at Columbia University.
“My challenge is always to find these interdisciplinary holes because my background is so messed up. You can’t say, this guy is a computer scientist or he’s a business person or an environmental scientist because I’m all over the place,” he says.
That’s part of the reason why Falco enjoys taking care of his son, Milo, so much.
“He’s the most awesome thing ever. I see him learning and it’s really amazing,” Falco says. “Spending so much time with him is very fun. He does things that my wife gets frustrated at because he’s a ball of energy and all over the place — just like me.”
In an op-ed in The New York Times, and in a TED Talk late last year, Oren Etzioni, PhD, author and CEO of the Allen Institute for Artificial Intelligence, suggested an update to Isaac Asimov’s three laws of robotics for the age of artificial intelligence. Given the widespread media attention generated by Elon Musk’s (and others’) warnings, these updates are worth reviewing.
The Warnings
In an open letter to the U.N., a group of specialists from 26 nations, led by Elon Musk, called on the United Nations to ban the development and use of autonomous weapons. The signatories included Musk and DeepMind co-founder Mustafa Suleyman, as well as more than 100 other leaders of robotics and artificial-intelligence companies. They write that AI technology has reached a point where the deployment of autonomous weapons is feasible within years, not decades, and many in the defense industry say that autonomous weapons will be the third revolution in warfare, after gunpowder and nuclear arms.
Another more political warning was recently broadcast on VoA: Russian President Vladimir Putin, speaking to a group of Russian students, called artificial intelligence “not only Russia’s future but the future of the whole of mankind… The one who becomes the leader in this sphere will be the ruler of the world. There are colossal opportunities and threats that are difficult to predict now.”
Asimov’s Three Rules
Isaac Asimov’s 1942 story “Runaround” features a government Handbook of Robotics (dated 2058) containing the following three rules:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
Etzioni’s proposed updates read:
An A.I. system must be subject to the full gamut of laws that apply to its human operator.
An A.I. system must clearly disclose that it is not human.
An A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.
Etzioni offered these updates to begin a discussion that could lead to a non-fictional Handbook of Robotics from the United Nations, and sooner than the 2058 sci-fi date: one that would regulate, but not thwart, the already growing global AI business.
And growing it is!
China’s Artificial Intelligence Manifesto
China recently announced its long-term goal of becoming #1 in A.I. by 2030. It plans to grow its A.I. industry to over $22 billion by 2020, $59 billion by 2025 and $150 billion by 2030. China did the same type of long-term strategic planning for robotics – to build an in-country industry and to transform the country from a low-cost labor source into a high-tech manufacturing resource – and it’s working.
With this major strategic long-term AI push, China is looking to rival U.S. market leaders such as Alphabet/Google, Apple, Amazon, IBM and Microsoft. China is keen not to be left behind in a technology that is increasingly pivotal — from online commerce to self-driving vehicles to energy to consumer products. China aims to catch up by solving issues including a lack of high-end computer chips, software that writes software, and trained personnel. Beijing will play a big role in policy support and regulation as well as providing and funding research, incentives and tax credits.
Premature or not, the time is now
Many in AI and robotics feel that the present state of AI development, including improvements in machine and deep learning methods, is primitive and decades away from independent thinking. Siri and Alexa, as fun and capable as they are, are still programmed by humans and cannot even initiate a conversation or truly understand what is said to them. Nevertheless, there is a reason people are uneasy when they imagine a future in which artificial intelligence decides what ‘it’ thinks is best for us. Consequently, global regulation can’t hurt.
Metallica’s European WorldWired tour, which opened to an ecstatic crowd of 15,000 in Copenhagen’s sold-out Royal Arena this Saturday, features a swarm of micro drones flying above the band. Shortly after the band breaks into their hit single “Moth Into Flame”, dozens of micro drones start emerging from the stage, forming a large rotating circle above the stage. As the music builds, more and more drones emerge and join the formation, creating increasingly complex patterns, culminating in a choreography of three interlocking rings that rotate in position.
This show’s debut marks the world’s first autonomous drone swarm performance in a major touring act. Unlike previous drone shows, this performance features indoor drones, flying above performers and right next to throngs of concert viewers in a live event setting. Flying immediately next to audiences creates a more intimate effect than outdoor drone shows. The same closeness also allows the creation of moving, three-dimensional sculptures like the ones seen in the video — an effect further enhanced by Metallica’s 360-degree stage setup, with concert viewers on all sides.
Flying drones close to and around people in such a setting is challenging. Unlike their outdoor counterparts, indoor drones cannot rely on GPS signals, which are severely degraded in indoor settings and do not offer the accuracy required for autonomous drone navigation on stage. The safety aspects of flying dozens of drones close to crowds in the high-pressure, live-event environment impose further challenges. Robustness to the uncertainties caused by changing show conditions in a touring setting, as well as variation in the drone systems’ components and sensors, including the hundreds of motors powering the drones, is another necessary condition for this drone show system.
“It’s all about safety and reliability first”, says Raffaello D’Andrea, founder of the company behind the drones used in the Metallica show, Verity Studios (full disclosure: I’m a co-founder). D’Andrea knows what he is talking about: In work with his previous company, which was snatched up by e-commerce giant Amazon for an eye-watering 775M USD in 2012, D’Andrea and his team created fleets of autonomous warehousing robots, moving inventory through the warehouse around the clock. That company, which has since been renamed Amazon Robotics, now operates up to 10,000 robots — in a single warehouse.
How was this achieved?
In a nutshell: Verity Studios’ drone show system is an advanced show automation system that uses distributed AI, robotics, and sophisticated algorithms to achieve the level of robust performance and safety required by the live entertainment industry. With a track record of >7,000 autonomous flights on Broadway, achieved with its larger Stage Flyer drones during 398 live shows, Verity Studios is no newcomer to this industry.
Many elements are needed to create a touring drone show; the drones themselves are just one aspect. Verity’s drones are autonomous, supervised by a human operator, who does not control drone motions individually. Instead, the operator only issues high-level commands such as “takeoff” or “land”, monitors the motions of multiple drones at a time, and reacts to anomalies. In other words, Verity’s advanced automation system takes over the role of multiple human pilots that would be required with standard, remote-controlled drones. The drones are flying mobile robots that navigate autonomously, piloting themselves, under human supervision. The autonomous drones’ motions and their lighting design are choreographed by Verity’s creative staff.
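Verity has not disclosed its control stack, but the division of labor described above can be illustrated with a toy sketch: the operator issues only fleet-level commands, while each (here, simulated) drone decides how to execute them. All class and method names below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Drone:
    """Toy stand-in for an autonomous show drone."""
    drone_id: int
    airborne: bool = False
    waypoints: list = field(default_factory=list)

    def takeoff(self):
        # A real drone would plan and fly its own trajectory here.
        self.airborne = True

    def land(self):
        self.airborne = False

class FleetSupervisor:
    """The single operator's interface: high-level commands only."""
    def __init__(self, drones):
        self.drones = drones

    def command(self, verb):
        for drone in self.drones:
            if verb == "takeoff":
                drone.takeoff()
            elif verb == "land":
                drone.land()

fleet = FleetSupervisor([Drone(i) for i in range(8)])
fleet.command("takeoff")   # one command, many drones
```

The point of the sketch is the interface boundary: the human supervises and reacts to anomalies, while trajectory planning and piloting stay on board each drone.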
To navigate autonomously, drones require a reliable method for determining their position in space. As mentioned above, while drones can use GPS for their autonomous navigation in an outdoor setting, GPS is not a viable option indoors: GPS signals degrade close to large structures (e.g., tall buildings) and are usually not available, or severely degraded, in indoor environments. Since degraded GPS may result in unreliable or unsafe conditions for autonomous flight, the Verity drones use proprietary indoor localization technology.
It is the combination of a reliable indoor positioning system with intelligent autonomous drones and a suitable operator interface that allows the single operator of the Metallica show to simultaneously control the coordinated movement of many drones. This pilot-less approach is not merely a matter of increasing efficiency and effectiveness (who wants to have dozens of pilots on staff?), but also a key safety requirement: Pilot errors have been an important contributing factor in dozens of documented drone accidents at live events. Safety risks rapidly increase as the number of drones increases, resulting in more complex flight plans and higher risks of mid-air collisions. Autonomous control allows safer operation of multiple drones than remote control by human pilots, especially when operating in a reduced airspace envelope.
Verity’s system also had to be engineered for safety in spite of other potential failures, including wireless interference, hardware or software component failures, power outages, and malicious disruption or hacking attacks. In its 398-show run on Broadway, the biggest challenge to safety turned out to be another factor: human error. While operated by theater staff on Broadway, Verity’s system correctly identified human errors on five occasions and prevented the affected drones from taking flight (on these occasions, the show continued with six or seven instead of the planned eight drones; only one show proceeded without any drones as a safety precaution, i.e., the drone show’s “uptime” was 99.7%). As my colleagues and I have outlined in a recently published overview of best practices for drone shows, when using drones at live events, safety is a hard requirement.
Another key element of Verity’s show creation process is its set of drone authoring tools. Planning shows like the Metallica performance requires tools for efficiently creating trajectories for large numbers of drones. The trajectories must account for the drones’ actual flight dynamics, including actuator limitations, as well as for aerodynamic effects such as air turbulence and lift. Drone motions generated by these tools need to be collision-free and must allow for emergency maneuvers. To create compelling effects, the authoring tools also need to extract all of the dynamic performance the drones are capable of; this is another area in which D’Andrea’s team gained considerable experience prior to founding Verity Studios, in this case through research at the Swiss Federal Institute of Technology’s Flying Machine Arena.
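The authoring tools themselves are proprietary, but one of the checks mentioned above, verifying that planned trajectories never bring two drones too close together, can be sketched directly. The sketch below assumes trajectories are sampled at common time steps as (x, y, z) points; the 1.5-meter separation threshold is an arbitrary placeholder, not Verity’s actual value.

```python
import numpy as np

def min_pairwise_separation(trajectories):
    """trajectories: array of shape (num_drones, num_timesteps, 3)."""
    traj = np.asarray(trajectories, dtype=float)
    n = traj.shape[0]
    smallest = np.inf
    for i in range(n):
        for j in range(i + 1, n):
            # Distance between drone i and drone j at every timestep.
            d = np.linalg.norm(traj[i] - traj[j], axis=1)
            smallest = min(smallest, d.min())
    return smallest

def is_collision_free(trajectories, min_separation=1.5):
    """Flag plans where any two drones come closer than min_separation meters."""
    return min_pairwise_separation(trajectories) >= min_separation
```

A production tool would additionally check velocity and acceleration limits against the drones’ dynamics and verify that safe emergency-stop maneuvers exist at every point along the plan.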
Creating a compelling drone show requires more than the drone show system itself. For this tour, Verity Studios partnered with TAIT Towers, the world’s leading stage automation company, to integrate the drones into the stage floor and to tackle a series of other technical challenges related to this touring show.
While technology is the key enabler, the starting point and key driver of Verity’s shows is non-technological: the show designers’ creative intent. This comprises defining the role of the show drones for the performance at hand, as well as determining their integration into the visual and musical motifs of the show’s creative concept. For Metallica, the drones’ flight trajectories and lighting were created by Verity’s choreography team, incorporating feedback from Metallica’s production team and the band.
Metallica’s WorldWired tour Metallica’s WorldWired Tour is their first worldwide tour after the World Magnetic Tour six years ago. The tour’s currently published European leg runs until 11 May 2018, with all general tickets sold out.
Recording electrical signals from inside a neuron in the living brain can reveal a great deal of information about that neuron’s function and how it coordinates with other cells in the brain. However, performing this kind of recording is extremely difficult, so only a handful of neuroscience labs around the world do it.
To make this technique more widely available, MIT engineers have now devised a way to automate the process, using a computer algorithm that analyzes microscope images and guides a robotic arm to the target cell.
This technology could allow more scientists to study single neurons and learn how they interact with other cells to enable cognition, sensory perception, and other brain functions. Researchers could also use it to learn more about how neural circuits are affected by brain disorders.
“Knowing how neurons communicate is fundamental to basic and clinical neuroscience. Our hope is this technology will allow you to look at what’s happening inside a cell, in terms of neural computation, or in a disease state,” says Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT, and a member of MIT’s Media Lab and McGovern Institute for Brain Research.
Boyden is the senior author of the paper, which appears in the Aug. 30 issue of Neuron. The paper’s lead author is MIT graduate student Ho-Jun Suk.
Precision guidance
For more than 30 years, neuroscientists have been using a technique known as patch clamping to record the electrical activity of cells. This method, which involves bringing a tiny, hollow glass pipette in contact with the cell membrane of a neuron, then opening up a small pore in the membrane, usually takes a graduate student or postdoc several months to learn. Learning to perform this on neurons in the living mammalian brain is even more difficult.
There are two types of patch clamping: a “blind” (not image-guided) method, which is limited because researchers cannot see where the cells are and can only record from whatever cell the pipette encounters first, and an image-guided version that allows a specific cell to be targeted.
Five years ago, Boyden and colleagues at MIT and Georgia Tech, including co-author Craig Forest, devised a way to automate the blind version of patch clamping. They created a computer algorithm that could guide the pipette to a cell based on measurements of a property called electrical impedance — which reflects how difficult it is for electricity to flow out of the pipette. If there are no cells around, electricity flows and impedance is low. When the tip hits a cell, electricity can’t flow as well and impedance goes up.
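In rough pseudocode, the stopping rule described above amounts to stepping the pipette forward, tracking the impedance, and halting as soon as it jumps above its recent baseline. The sketch below is only a schematic of that logic; the step size, threshold, and hardware callables (`measure_impedance`, `advance_pipette`) are illustrative assumptions, not the authors’ published parameters.

```python
def descend_until_cell(measure_impedance, advance_pipette,
                       step_um=2.0, jump_ratio=1.2, baseline_window=10):
    """Advance the pipette until impedance rises, suggesting contact with a cell.

    measure_impedance: callable returning the current impedance (ohms)
    advance_pipette:   callable moving the pipette forward by step_um microns
    """
    history = []
    while True:
        z = measure_impedance()
        history.append(z)
        if len(history) > baseline_window:
            # Baseline is the average of the last few readings before this one.
            baseline = sum(history[-baseline_window - 1:-1]) / baseline_window
            if z > jump_ratio * baseline:
                return z  # impedance jumped: likely touching a membrane, so stop
        advance_pipette(step_um)
```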
Once the pipette detects a cell, it can stop moving instantly, preventing it from poking through the membrane. A vacuum pump then applies suction to form a seal with the cell’s membrane. Then, the electrode can break through the membrane to record the cell’s internal electrical activity.
The researchers achieved very high accuracy using this technique, but it still could not be used to target a specific cell. For most studies, neuroscientists have a particular cell type they would like to learn about, Boyden says.
“It might be a cell that is compromised in autism, or is altered in schizophrenia, or a cell that is active when a memory is stored. That’s the cell that you want to know about,” he says. “You don’t want to patch a thousand cells until you find the one that is interesting.”
To enable this kind of precise targeting, the researchers set out to automate image-guided patch clamping. This technique is difficult to perform manually because, although the scientist can see the target neuron and the pipette through a microscope, he or she must compensate for the fact that nearby cells will move as the pipette enters the brain.
“It’s almost like trying to hit a moving target inside the brain, which is a delicate tissue,” Suk says. “For machines it’s easier because they can keep track of where the cell is, they can automatically move the focus of the microscope, and they can automatically move the pipette.”
By combining several image-processing techniques, the researchers came up with an algorithm that guides the pipette to within about 25 microns of the target cell. At that point, the system begins to rely on a combination of imagery and impedance, which is more accurate at detecting contact between the pipette and the target cell than either signal alone.
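The paper’s actual fusion procedure is more involved, but the idea of requiring both cues to agree before declaring contact can be sketched as follows; the thresholds and the two measurements are illustrative assumptions only.

```python
def contact_detected(pixel_distance_um, impedance, impedance_baseline,
                     max_distance_um=5.0, jump_ratio=1.2):
    """Declare pipette-cell contact only when image and impedance cues agree.

    pixel_distance_um:  image-based estimate of pipette-to-cell distance
    impedance:          current pipette impedance
    impedance_baseline: impedance measured away from any cell
    """
    close_in_image = pixel_distance_um <= max_distance_um
    impedance_jumped = impedance > jump_ratio * impedance_baseline
    return close_in_image and impedance_jumped
```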
The researchers imaged the cells with two-photon microscopy, a commonly used technique that uses a pulsed laser to send infrared light into the brain, lighting up cells that have been engineered to express a fluorescent protein.
Using this automated approach, the researchers were able to successfully target and record from two types of cells — a class of interneurons, which relay messages between other neurons, and a set of excitatory neurons known as pyramidal cells. They achieved a success rate of about 20 percent, which is comparable to the performance of highly trained scientists performing the process manually.
Unraveling circuits
This technology paves the way for in-depth studies of the behavior of specific neurons, which could shed light on both their normal functions and how they go awry in diseases such as Alzheimer’s or schizophrenia. For example, the interneurons that the researchers studied in this paper have previously been linked with Alzheimer’s. A recent study of mice, led by Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory, and conducted in collaboration with Boyden, reported that inducing a specific frequency of brain-wave oscillation in interneurons in the hippocampus could help clear amyloid plaques similar to those found in Alzheimer’s patients.
“You really would love to know what’s happening in those cells,” Boyden says. “Are they signaling to specific downstream cells, which then contribute to the therapeutic result? The brain is a circuit, and to understand how a circuit works, you have to be able to monitor the components of the circuit while they are in action.”
This technique could also enable studies of fundamental questions in neuroscience, such as how individual neurons interact with each other as the brain makes a decision or recalls a memory.
Bernardo Sabatini, a professor of neurobiology at Harvard Medical School, says he is interested in adapting this technique to use in his lab, where students spend a great deal of time recording electrical activity from neurons growing in a lab dish.
“It’s silly to have amazingly intelligent students doing tedious tasks that could be done by robots,” says Sabatini, who was not involved in this study. “I would be happy to have robots do more of the experimentation so we can focus on the design and interpretation of the experiments.”
To help other labs adopt the new technology, the researchers plan to put the details of their approach on their web site, autopatcher.org.
Other co-authors include Ingrid van Welie, Suhasa Kodandaramaiah, and Brian Allen. The research was funded by Jeremy and Joyce Wertheimer, the National Institutes of Health (including the NIH Single Cell Initiative and the NIH Director’s Pioneer Award), the HHMI-Simons Faculty Scholars Program, and the New York Stem Cell Foundation-Robertson Award.
In this episode, MeiXing Dong talks with Leon Kuperman, CTO of CUJO, about cybersecurity threats and how to guard against them. They discuss how CUJO, a smart hardware firewall, helps protect the home against online threats.
Leon Kuperman
Leon Kuperman is the CTO of CUJO IoT Security. He co-founded ZENEDGE, an enterprise web application security platform, and Truition Inc. He is also the CTO of BIDZ.com.
Mike Salem from Udacity’s Robotics Nanodegree is hosting a series of interviews with professional roboticists as part of their free online material.
This week we’re featuring Mike’s interview with Cory Kidd. Dr. Kidd is focused on innovating within the rapidly changing healthcare technology market. He is the founder and CEO of Catalia Health, a company that delivers patient engagement across a variety of chronic conditions.
You can find all the interviews here. We’ll be posting them regularly on Robohub.