In this episode, Audrow Nash interviews Katsu Yamane, Senior Research Scientist at Disney, about robotics at Disney. Yamane discusses Disney's history with robots, how Disney uses robots today, how designing robots at Disney differs from academia and industry, a realistic robot simulator used by Disney's animators, and what it means to become a Disney Research "Imagineer."
Katsu Yamane
Katsu received his PhD in mechanical engineering from the University of Tokyo in 2002. Following postdoctoral work at Carnegie Mellon University from 2002 to 2003, he was a faculty member at the University of Tokyo until he joined Disney Research, Pittsburgh, in October 2008. His main research area is humanoid robot control and motion synthesis, in particular methods involving human motion data and dynamic balancing. He is also interested in developing algorithms for character animation. He has always been fascinated by the way humans control their bodies, which led him to research on biomechanical human modeling and simulation to understand human sensation and motor control.
Drone delivery is currently a big topic in robotics. However, the reason your internet shopping doesn't yet arrive via drone is that current flying robots can pose a safety risk to people and are difficult to transport and store.
A team from the Floreano Lab, NCCR Robotics and EPFL presents a new type of cargo drone that is inspired by origami, is lightweight and easily manoeuvrable, and uses a foldaway cage to ensure safety and transportability.
A foldable protective cage surrounds the multicopter and the package to be carried, shielding the spinning propellers from anyone nearby. When the cage is opened to load or unload the drone, a safety mechanism switches off the motors, so the drone is safe even in the hands of completely untrained users.
Where this drone takes a step forward is in the folding cage, which allows it to be easily stowed away and transported. The team took inspiration from the origami folding shelters developed for space exploration and adapted them into a Chinese-lantern shape: instead of paper, the skeletal structure is built from carbon-fibre tubes and 3D-printed flexible joints. The cage is opened and closed by working a joint mechanism at the top and bottom and pushing the two halves apart; in fact, either operation can be performed in just 1.2 seconds.
By adding such a cage to a multicopter, the team ensures safety for anyone who comes into contact with the drone. The drone can even be caught while it is flying, meaning it can deliver to people in places where landing is hard or impossible, such as a collapsed building during a search and rescue mission, where first aid, medication, water or food may need to be delivered quickly.
Currently the drone can carry 0.5 kg of cargo over 2 km, and visitors to EPFL this summer may have noticed it transporting small items across campus, which it did 150 times. The team hopes a scaled-up version will be able to carry as much as 2 kg over 15 km, enough for longer-distance deliveries.
Reference:
P.M. Kornatowski, S. Mintchev, and D. Floreano, “An origami-inspired cargo drone”, in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2017.
Mike Salem from Udacity’s Robotics Nanodegree is hosting a series of interviews with professional roboticists as part of their free online material.
This week we're featuring Mike's interview with Abdelrahman Elogeel. Abdelrahman is a Software Development Engineer on the Core Machine Learning team at Amazon Robotics. His work brings state-of-the-art machine learning techniques to bear on a variety of problems for robots at Amazon's robotic fulfillment centers.
You can find all the interviews here. We’ll be posting them regularly on Robohub.
NHTSA released its latest draft robocar regulations just a week after the U.S. House passed a new regulatory regime and the Senate started working on its own. The proposed regulations preempt state regulation of vehicle design and allow companies to apply for high-volume exemptions from the standards that exist for human-driven cars.
It's clear that the new approach will be quite different from, and much more hands-off than, the Obama-era one. There are not a lot of things to like about the Trump administration, but this could be one of them. The prior regulations ran to 116 pages of detail, though they were mostly listed as "voluntary." I wrote a long critique of those regulations in a 4-part series, which can be found under my NHTSA tag. They seem to have paid attention to that commentary and the similar commentary of others.
At 26 pages, the new report is much more modest, and actually says very little. Indeed, I could sum it up as follows:
Do the stuff you’re already doing
Pay attention to where and when your car can drive and document that
Document your processes internally and for the public
Go to the existing standards bodies (SAE, ISO etc.) for guidance
Create a standard data format for your incident logs
Don’t forget all the work on crash avoidance, survival and post-crash safety in modern cars that we worked very hard on
Plans for how states and the feds will work together on regulating this
Goals vs. Approaches
The document does a better job of distinguishing goals — public goods that it is the government's role to promote — from approaches to those goals, which should be entirely the province of industry.
The new document is much more explicit that the 12 “safety design elements” are voluntary. I continue to believe that there is a risk they may not be truly voluntary, as there will be great pressure to conform with them, and possible increased liability for those who don’t, but the new document tries to avoid that, and its requests are much milder.
The document reflects the important realization that developers in this space will be creating new paths to safety and establishing new and different concepts of best practices. Existing standards have value, but at best they encode conventional wisdom, and robocars will not be built on conventional wisdom. The new document therefore mostly recommends that existing standards be considered, which is a reasonable plan.
A lightweight regulatory philosophy
My own analysis is guided by a lightweight regulatory approach which has been the norm until now. The government’s role is to determine important public goals and interests, and to use regulations and enforcement when, and only when, it becomes clear that industry can’t be trusted to meet these goals on its own.
In particular, the government should very rarely regulate how something should be done, and focus instead on what needs to happen as the end result, and why. In the past, all automotive safety technologies were developed by vendors and deployed, sometimes for decades, before they were regulated. When they were regulated, it was more along the lines of “All cars should now have anti-lock brakes.” Only with the more mature technologies have the regulations had to go into detail on how to build them.
Worthwhile public goals include safety, of course, and the promotion of innovation. We want to encourage both competition and cooperation in the right places. We want to protect consumer rights and privacy. (The prior regulations proposed a mandatory sharing of incident data which is watered down greatly in these new regulations.)
I call this lightweight because others have called for a great deal more regulation. I don't, however, view it as highly laissez-faire. Driving is already highly regulated, and the idea that regulators need to write rules to prevent companies from doing things they have shown no sign of doing seems odd to me, particularly in a fast-changing field where regulators (and even developers) admit they have limited knowledge of what the technology's final form will actually be.
Stating the obvious
While I laud the reduction of detail in these regulations, it's worth pointing out that many of the remaining sections are stripped down to "motherhood" requirements — requirements that are obvious and that every developer has known for some time. You don't have to say that the vehicle should follow the vehicle code and not hit other cars; anybody who needs to be told that is not a robocar developer. That set of obvious goals belongs in a non-governmental advice document (which this document does, in part, declare itself to be, though it is of course governmental) rather than in something considered regulatory.
Overstating the obvious and discouraging the “black box.”
Sometimes a statement can be both obvious but also possibly wrong in the light of new technology. The document has many requirements that vendors document their thinking and processes which may be very difficult to do with systems built with machine learning. Machine learning sometimes produces a “black box” that works, but there is minimal knowledge as to how it works. It may be that such systems will outperform other systems, leaving us with the dilemma of choosing between a superior system we can’t document and understand, and an inferior one we can.
There is a new research area known as "explainable AI" which hopes to bridge this gap and make it possible to document and understand why machine learning systems operate as they do. This is promising research, but it may never be complete. In spite of this, EU regulations are already attempting to forbid unexplainable AI. This may cut off very productive avenues of development — we don't yet know enough to be sure.
Some minor notes
The name
The new report pushes a new term — Automated Driving Systems. It seems every iteration comes up with a new name. The field is really starting to need a name people agree on, since nobody seems to much like driverless cars, self-driving cars, autonomous vehicles, automated vehicles, robocars or any of the others. This one is just as unwieldy, and its acronym is an English word and thus hard to search for.
The levels
The SAE levels continue to be used. I have been critical of the levels before, recently in this satire. It is wrong to try to understand robocars primarily through the role of humans in their operation, and wrong to suggest there is a progression of levels based on that.
The 12 safety elements
As noted, most of the sections simply advise obvious policies which everybody is already doing, and advise that teams document what they are doing.
1. System Safety
This section is modest, and describes fairly common existing practices for high reliability software systems. (Almost to the point that there is no real need for the government to point them out.)
2. Operational Design Domain
The idea of defining the situations in which the car can do certain things is a much better approach than imagining levels of human involvement. I would even suggest it replace the levels, with the human seen simply as one of the tools for operating outside certain domains. Still, I see minimal need for NHTSA to say this — everybody already knows that roads and their conditions are different and complex and need different classes of technology.
3. Object and Event Detection and Response, 4. Fallback, 5. Validation, 6. HMI
Again, this is fairly redundant. Vendors don't need to be told that vehicles must obey the vehicle code, stay in their lane and not hit things; that's already the law. They also know that only with a fallback strategy can they approach the reliability needed.
7. Computer Security
While everything here is already on the minds of developers, I don’t fault the reminder here because traditional automakers have a history of having done security badly. The call for a central clearing house on attacks is good, though it should not necessarily be Auto-ISAC.
8. Occupant Protection
A great deal of the current FMVSS (Federal Motor Vehicle Safety Standards) are about this, and because many vehicles may use exemptions from FMVSS to get going, a reminder about this is in order.
10. Data Recording
The most interesting proposal in the prior document was a requirement for public sharing of incident and crash data so that all teams could learn from every problem any team encounters. This would speed up development and improve safety, but vendors don’t like the fact it removes a key competitive edge — their corpus of driving experience.
The new document calls for a standard data format, and makes general motherhood calls for storing data in a crash, something everybody already does.
The call for a standard is actually the difficult part. Every vehicle has a different sensor suite and its own tools for examining the sensor data, and standardizing that at a truly useful level is a serious task. I had expected this task to fall to outside testing companies, who would learn (possibly by reverse engineering) the data formats of each car and translate them into a standard format that was actually useful. I fear a standard agreed upon by major players (who don't want to share their data) will be minimal and less useful.
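To make the difficulty concrete, here is a minimal sketch, in Python, of what a shared incident record might have to contain. The field names are invented for illustration only; they are not NHTSA's proposal or any vendor's format, and the real challenge is the raw sensor data that this sketch waves away in a single field.

```python
# Hypothetical sketch of a minimal shared incident-log record.
# Field names are illustrative only, not NHTSA's or any vendor's schema.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class IncidentRecord:
    vehicle_id: str                 # anonymized vehicle identifier
    timestamp_utc: str              # ISO 8601 time of the incident
    location: Dict[str, float]      # {"lat": ..., "lon": ...}
    speed_mps: float                # vehicle speed at the moment of the incident
    maneuver: str                   # e.g. "lane_change", "hard_brake"
    disengagement: bool             # did a human take over?
    # The hard part: every vendor's raw sensor data looks different,
    # so a truly useful standard cannot stop at a summary field like this.
    sensor_summaries: List[Dict] = field(default_factory=list)

record = IncidentRecord(
    vehicle_id="veh-0042",
    timestamp_utc="2017-09-12T17:03:00Z",
    location={"lat": 37.79, "lon": -122.40},
    speed_mps=11.2,
    maneuver="hard_brake",
    disengagement=True,
)
```

A format this thin would be easy for the major players to agree on, which is exactly the worry: it captures the incident but not the sensor data that would let other teams learn from it.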
State Roles
A large section of the document is about the bureaucratic distribution of roles between states and federal bodies. I will provide analysis of this later.
Conclusion
This document reflects a major change, almost a reversal, and largely a positive one. Going forward from here, I would encourage that the debate on regulation focus on
What public goods does the government have an interest in protecting?
Which ones are vendors showing they can’t be trusted to support voluntarily, both by present actions and past history?
How can innovation be encouraged and facilitated, and how can the public be kept well informed about what's going on?
One of the key public goods missing from this document is privacy protection. This is one of the areas where vendors don’t have a great past history.
Another one is civil rights protection — for example what powers police will want over cars — where the government has a bad history.
If you follow the robotics community on the twittersphere, you’ll have noticed that Rodney Brooks is publishing a series of essays on the future of robotics and AI which has been gathering wide attention.
His articles are designed to be read as stand-alone essays, in any order. Robohub will be featuring links to the articles as they come out over the next six months or so. They are worth the read.
As 3-D printing has become a mainstream technology, industry and academic researchers have been investigating printable structures that will fold themselves into useful three-dimensional shapes when heated or immersed in water.
In a paper appearing in the American Chemical Society’s journal Applied Materials and Interfaces, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and colleagues report something new: a printable structure that begins to fold itself up as soon as it’s peeled off the printing platform.
One of the big advantages of devices that self-fold without any outside stimulus, the researchers say, is that they can involve a wider range of materials and more delicate structures.
“If you want to add printed electronics, you’re generally going to be using some organic materials, because a majority of printed electronics rely on them,” says Subramanian Sundaram, an MIT graduate student in electrical engineering and computer science and first author on the paper. “These materials are often very, very sensitive to moisture and temperature. So if you have these electronics and parts, and you want to initiate folds in them, you wouldn’t want to dunk them in water or heat them, because then your electronics are going to degrade.”
To illustrate this idea, the researchers built a prototype self-folding printable device that includes electrical leads and a polymer “pixel” that changes from transparent to opaque when a voltage is applied to it. The device, which is a variation on the “printable goldbug” that Sundaram and his colleagues announced earlier this year, starts out looking something like the letter “H.” But each of the legs of the H folds itself in two different directions, producing a tabletop shape.
The researchers also built several different versions of the same basic hinge design, which show that they can control the precise angle at which a joint folds. In tests, they forcibly straightened the hinges by attaching them to a weight, but when the weight was removed, the hinges resumed their original folds.
In the short term, the technique could enable the custom manufacture of sensors, displays, or antennas whose functionality depends on their three-dimensional shape. Longer term, the researchers envision the possibility of printable robots.
Sundaram is joined on the paper by his advisor, Wojciech Matusik, an associate professor of electrical engineering and computer science (EECS) at MIT; Marc Baldo, also an associate professor of EECS, who specializes in organic electronics; David Kim, a technical assistant in Matusik’s Computational Fabrication Group; and Ryan Hayward, a professor of polymer science and engineering at the University of Massachusetts at Amherst.
This clip shows an example of an accelerated fold. (Image: Tom Buehler/CSAIL)
Stress relief
The key to the researchers’ design is a new printer-ink material that expands after it solidifies, which is unusual. Most printer-ink materials contract slightly as they solidify, a technical limitation that designers frequently have to work around.
Printed devices are built up in layers, and in their prototypes the MIT researchers deposit their expanding material at precise locations in either the top or bottom few layers. The bottom layer adheres slightly to the printer platform, and that adhesion is enough to hold the device flat as the layers are built up. But as soon as the finished device is peeled off the platform, the joints made from the new material begin to expand, bending the device in the opposite direction.
Like many technological breakthroughs, the CSAIL researchers’ discovery of the material was an accident. Most of the printer materials used by Matusik’s Computational Fabrication Group are combinations of polymers, long molecules that consist of chainlike repetitions of single molecular components, or monomers. Mixing these components is one method for creating printer inks with specific physical properties.
While trying to develop an ink that yielded more flexible printed components, the CSAIL researchers inadvertently hit upon one that expanded slightly after it hardened. They immediately recognized the potential utility of expanding polymers and began experimenting with modifications of the mixture, until they arrived at a recipe that let them build joints that would expand enough to fold a printed device in half.
Whys and wherefores
Hayward’s contribution to the paper was to help the MIT team explain the material’s expansion. The ink that produces the most forceful expansion includes several long molecular chains and one much shorter chain, made up of the monomer isooctyl acrylate. When a layer of the ink is exposed to ultraviolet light — or “cured,” a process commonly used in 3-D printing to harden materials deposited as liquids — the long chains connect to each other, producing a rigid thicket of tangled molecules.
When another layer of the material is deposited on top of the first, the small chains of isooctyl acrylate in the top, liquid layer sink down into the lower, more rigid layer. There, they interact with the longer chains to exert an expansive force, which the adhesion to the printing platform temporarily resists.
The researchers hope that a better theoretical understanding of the reason for the material’s expansion will enable them to design material tailored to specific applications — including materials that resist the 1–3 percent contraction typical of many printed polymers after curing.
“This work is exciting because it provides a way to create functional electronics on 3-D objects,” says Michael Dickey, a professor of chemical engineering at North Carolina State University. “Typically, electronic processing is done in a planar, 2-D fashion and thus needs a flat surface. The work here provides a route to create electronics using more conventional planar techniques on a 2-D surface and then transform them into a 3-D shape, while retaining the function of the electronics. The transformation happens by a clever trick to build stress into the materials during printing.”
The NTSB (National Transportation Safety Board) has released a preliminary report on the fatal Tesla crash with the full report expected later this week. The report is much less favourable to autopilots than their earlier evaluation.
(This is a giant news day for Robocars. Today NHTSA also released their new draft robocar regulations which appear to be much simpler than the earlier 116 page document that I was very critical of last year. It’s a busy day, so I will be posting a more detailed evaluation of the new regulations — and the proposed new robocar laws from the House — later in the week.)
The earlier NTSB report indicated that though the autopilot had its flaws, overall the system was working. That is, even though some drivers were misusing the autopilot, the combined population of drivers, those misusing it and those using it properly, was overall safer than drivers with no autopilot. The new report makes it clear that this does not excuse the autopilot being so easy to abuse. (By abuse, I mean ignoring the warnings and treating it like a robocar, letting it drive you without actively monitoring the road, ready to take control.)
While the report mostly faults the truck driver for turning at the wrong time, it blames Tesla for not doing a good enough job to assure that the driver is not abusing the autopilot. Tesla makes you touch the wheel every so often, but NTSB notes that it is possible to touch the wheel without actually looking at the road. NTSB also is concerned that the autopilot can operate in this fashion even on roads it was not designed for. They note that Tesla has improved some of these things since the accident.
This means that "touch the wheel" systems will probably not be considered acceptable in the future, and there will have to be some means of assuring the driver is really paying attention. Some vendors have decided to put in cameras that watch the driver, in particular the driver's eyes, to check for attention. After the Tesla accident, I proposed a system which tested driver attention from time to time and punished drivers who were not paying attention, which could do the job without adding new hardware.
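For illustration, here is a toy sketch of the logic such an attention test might use. The response limit and the lockout penalty are assumptions of mine for this example, not any vendor's actual implementation.

```python
import random

# Toy sketch of a periodic attention check (invented thresholds, not any vendor's system).
# The car occasionally asks for a small acknowledgement, such as a button press;
# a slow response counts as a strike, and repeated strikes lock out the autopilot.

RESPONSE_LIMIT_S = 3.0   # assumed acceptable reaction time
MAX_STRIKES = 2          # assumed tolerance before the "punishment"

def attention_check(response_time_s: float, strikes: int):
    """Return the updated strike count and whether the autopilot should be disabled."""
    if response_time_s > RESPONSE_LIMIT_S:
        strikes += 1
    return strikes, strikes >= MAX_STRIKES

# Simulate a drive with a handful of random prompts and reaction times.
strikes = 0
for prompt in range(5):
    reaction = random.uniform(0.5, 5.0)   # stand-in for the driver's measured reaction
    strikes, locked_out = attention_check(reaction, strikes)
    if locked_out:
        print("Driver inattentive: autopilot disabled for the rest of the trip.")
        break
```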
It also seems that autopilot cars will need to have maps of what roads they work on and which they don’t, and limit features based on the type of road you’re on.
Summer is not without its annoyances — mosquitos, wasps, and ants, to name a few. As the cool breeze of September pushes us back to work, labs across the country are reconvening to tackle nature's hardest problems. Sometimes forces that seem diametrically opposed come together in beautiful ways, as when robotics is infused into living organisms.
This past summer, researchers at Harvard and Arizona State University collaborated on turning living E. coli bacteria into a cellular robot, called a "ribocomputer." The Harvard scientists successfully stored archived movie footage as digital content in the bacterium most famous for making Chipotle customers violently ill. According to Seth Shipman, lead researcher at Harvard, this was the first time anyone has archived data in a living organism.
Responding to the original article, published in July in Nature, Julius Lucks, a bioengineer at Northwestern University, said that Shipman's discovery will enable wider exploitation of DNA encoding. "What these papers represent is just how good we are getting at harnessing that power," explained Lucks. The key to the discovery was Shipman's ability to disguise the movie pixels in DNA's four-letter code: "molecules represented by the letters A, T, G and C—and synthesized that DNA. But instead of generating one long strand of code, they arranged it, along with other genetic elements, into short segments that looked like fragments of viral DNA." Another important factor was E. coli's natural ability "to grab errant pieces of viral DNA and store them in its own genome—a way of keeping a chronological record of invaders. So when the researchers introduced the pieces of movie-turned-synthetic DNA—disguised as viral DNA—E. coli's molecular machinery grabbed them and filed them away."
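To give a feel for what "disguising" digital data in the four-letter code means, here is a toy sketch of the basic idea of packing bits into bases. It is not the Harvard team's actual encoding, which also records pixel positions and wraps the payload in viral-like flanking sequences; it only shows how two bits can map to each base.

```python
# Toy illustration of packing binary data into DNA's four-letter alphabet.
# This is NOT the Harvard team's actual scheme; it just shows the basic
# two-bits-per-base idea behind representing digital data as A, T, G and C.

BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}

def bytes_to_dna(data: bytes) -> str:
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):            # four 2-bit chunks per byte
            bases.append(BASE_FOR_BITS[(byte >> shift) & 0b11])
    return "".join(bases)

print(bytes_to_dna(b"GIF"))  # a 3-byte payload becomes a 12-base strand
```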
Shipman used this methodology to eventually turn the cells into a computer that not only stores data but actually performs logic-based decisions. Partnering with Alexander Green at Arizona State University's Biodesign Institute, the two institutions built their ribocomputer, which programs bacteria with ribonucleic acid, or RNA. According to Green, the "ribocomputer can evaluate up to a dozen inputs, make logic-based decisions using AND, OR, and NOT operations, and give the cell commands." Green stated that this is the most complex biological computer created in a living cell to date. The discovery by Green and Shipman means that cells could now be programmed to self-destruct if they sense the presence of cancer markers, or even to heal the body from within by attacking foreign toxins.
Timothy Lu of MIT called the discovery the beginning of the "golden age of circuit design." Lu added, "The way that electrical engineers have gone about establishing design hierarchy or abstraction layers — I think that's going to be really important for biology." In a recent IEEE article, Lucks cautioned that manipulating nature in this way ultimately raises a host of ethical considerations: "I don't think anybody would really argue that it's unethical to do this in E. coli. But as you go up in the chain [of living organisms], it gets more interesting from an ethical point of view."
Nature has been the inspiration for numerous discoveries in modern robotics, and has even spawned its own field, biomimicry. However, manipulating living organisms to serve human whims is only just beginning to take shape. A couple of years ago, Hong Liang, a researcher at Texas A&M University, outfitted a cockroach with a 3 g backpack-like device containing a microprocessor, a lithium battery, a camera sensor, and an electrical nerve-control system. Liang then used her makeshift insect robo-suit to remotely drive the waterbug through a maze.
When asked by the Guardian what prompted her to use bugs as robots, Liang explained, "Insects can do things a robot cannot. They can go into small places, sense the environment, and if there's movement, from a predator say, they can escape much better than a system designed by a human. We wanted to find ways to work with them."
Liang believes that robo-roaches could be especially useful in disaster recovery situations that take advantage of the insect's small size and endurance. She says that some cockroaches can carry five times their own bodyweight, but the heavier the load, the greater the toll on their performance. "We did an endurance test and they do get tired," Liang explained. "We put them on a treadmill for a minute and then let them rest. If the backpack is lighter, they can go on for longer." Liang has inspired other labs to work with different species of insects.
Draper, the US defense contractor, is working on its own insect robot by turning live dragonflies into controllable, undetectable drones. The DragonflEye Project departs from the technique developed by Liang in that it uses light to steer neurons instead of electrical nerve stimulation. Jesse Wheeler, the project lead at Draper, says this methodology is like "a joystick that tells the system how to coordinate flight activities." Through this "joystick," Wheeler can steer the wings in flight and send mission coordinates to the bug via an attached micro backpack that includes a guidance system, solar energy cells, navigation cells, and optical stimulation.
Draper believes that swarms of digitally enhanced insects might hold the key to national defense: locusts and bees have been programmed to identify scents such as chemical explosives. The critters could eventually be programmed to collect and analyze samples for homeland security, in addition to the obvious surveillance opportunities. Liang boasts that her cyborg roaches are "more versatile and flexible, and they require less control" than traditional robots. However, she also reminds us that "they're more real": even with mechanical backpacks, living organisms are ultimately not machines.
Author’s note: This topic and more will be discussed at our next RobotLabNYC event in one week on September 19th at 6pm, “Investing In Unmanned Systems,” with experts from NASA, AUVSI, and Genius NY.
This summer, a survey released by the American Automobile Association showed that 78 percent of Americans feared riding in a self-driving car, with just 19 percent trusting the technology. What might it take to alter public opinion on the issue? Iyad Rahwan, the AT&T Career Development Professor in the MIT Media Lab, has studied the issue at length, and, along with Jean-Francois Bonnefon of the Toulouse School of Economics and Azim Shariff of the University of California at Irvine, has authored a new commentary on the subject, titled, “Psychological roadblocks to the adoption of self-driving vehicles,” published today in Nature Human Behavior. Rahwan spoke to MIT News about the hurdles automakers face if they want greater public buy-in for autonomous vehicles.
Q: Your new paper states that when it comes to autonomous vehicles, trust “will determine how widely they are adopted by consumers, and how tolerated they are by everyone else.” Why is this?
A: It’s a new kind of agent in the world. We’ve always built tools and had to trust that technology will function in the way it was intended. We’ve had to trust that the materials are reliable and don’t have health hazards, and that there are consumer protection entities that promote the interests of consumers. But these are passive products that we choose to use. For the first time in history we are building objects that are proactive and have autonomy and are even adaptive. They are learning behaviors that may be different from the ones they were originally programmed for. We don’t really know how to get people to trust such entities, because humans don’t have mental models of what these entities are, what they’re capable of, how they learn.
Before we can trust machines like autonomous vehicles, we have a number of challenges. The first is technical: the challenge of building an AI [artificial intelligence] system that can drive a car. The second is legal and regulatory: Who is liable for different kinds of faults? A third class of challenges is psychological. Unless people are comfortable putting their lives in the hands of AI, then none of this will matter. People won’t buy the product, the economics won’t work, and that’s the end of the story. What we’re trying to highlight in this paper is that these psychological challenges have to be taken seriously, even if [people] are irrational in the way they assess risk, even if the technology is safe and the legal framework is reliable.
Q: What are the specific psychological issues people have with autonomous vehicles?
A: We classify three psychological challenges that we think are fairly big. One of them is dilemmas: A lot of people are concerned about how autonomous vehicles will resolve ethical dilemmas. How will they decide, for example, whether to prioritize safety for the passenger or safety for pedestrians? Should this influence the way in which the car makes a decision about relative risk? And what we’re finding is that people have an idea about how to solve this dilemma: The car should just minimize harm. But the problem is that people are not willing to buy such cars, because they want to buy cars that will always prioritize themselves.
A second one is that people don’t always reason about risk in an unbiased way. People may overplay the risk of dying in a car crash caused by an autonomous vehicle even if autonomous vehicles are, on the average, safer. We’ve seen this kind of overreaction in other fields. Many people are afraid of flying even though you’re incredibly less likely to die from a plane crash than a car crash. So people don’t always reason about risk.
A: The third class of psychological challenges is that we don't always have transparency about what the car is thinking and why it's doing what it's doing. The carmaker has better knowledge of what the car thinks and how it behaves … which makes it more difficult for people to predict the behavior of autonomous vehicles, which can also diminish trust. One of the preconditions of trust is predictability: If I can trust that you will behave in a particular way, I can behave according to that expectation.
Q: In the paper you state that autonomous vehicles are better depicted “as being perfected, not as perfect.” In essence, is that your advice to the auto industry?
A: Yes, I think setting up very high expectations can be a recipe for disaster, because if you overpromise and underdeliver, you get in trouble. That is not to say that we should underpromise. We should just be a bit realistic about what we promise. If the promise is an improvement on the current status quo, that is, a reduction in risk to everyone, both pedestrians as well as passengers in cars, that’s an admirable goal. Even if we achieve it in a small way, that’s already progress that we should take seriously. I think being transparent about that, and being transparent about the progress being made toward that goal, is crucial.
In episode eight of season three we return to the epic (or maybe not so epic) clash between frequentists and Bayesians, take a listener question about the ethical questions that creators of machine learning should be asking of themselves (not just of their tools), and hear a conversation with Ernest Mwebaze of Makerere University.
If you enjoyed this episode, you may also want to listen to:
IBM and MIT today announced that IBM plans to make a 10-year, $240 million investment to create the MIT–IBM Watson AI Lab in partnership with MIT. The lab will carry out fundamental artificial intelligence (AI) research and seek to propel scientific breakthroughs that unlock the potential of AI. The collaboration aims to advance AI hardware, software, and algorithms related to deep learning and other areas; increase AI’s impact on industries, such as health care and cybersecurity; and explore the economic and ethical implications of AI on society. IBM’s $240 million investment in the lab will support research by IBM and MIT scientists.
The new lab will be one of the largest long-term university-industry AI collaborations to date, mobilizing the talent of more than 100 AI scientists, professors, and students to pursue joint research at IBM’s Research Lab in Cambridge, Massachusetts — co-located with the IBM Watson Health and IBM Security headquarters in Kendall Square — and on the neighboring MIT campus.
The lab will be co-chaired by Dario Gil, IBM Research VP of AI and IBM Q, and Anantha P. Chandrakasan, dean of MIT’s School of Engineering. (Read a related Q&A with Chandrakasan.) IBM and MIT plan to issue a call for proposals to MIT researchers and IBM scientists to submit their ideas for joint research to push the boundaries in AI science and technology in several areas, including:
AI algorithms: Developing advanced algorithms to expand capabilities in machine learning and reasoning. Researchers will create AI systems that move beyond specialized tasks to tackle more complex problems and benefit from robust, continuous learning. Researchers will invent new algorithms that can not only leverage big data when available, but also learn from limited data to augment human intelligence.
Physics of AI: Investigating new AI hardware materials, devices, and architectures that will support future analog computational approaches to AI model training and deployment, as well as the intersection of quantum computing and machine learning. The latter involves using AI to help characterize and improve quantum devices, and researching the use of quantum computing to optimize and speed up machine-learning algorithms and other AI applications.
Application of AI to industries: Given its location in IBM Watson Health and IBM Security headquarters in Kendall Square, a global hub of biomedical innovation, the lab will develop new applications of AI for professional use, including fields such as health care and cybersecurity. The collaboration will explore the use of AI in areas such as the security and privacy of medical data, personalization of health care, image analysis, and the optimum treatment paths for specific patients.
Advancing shared prosperity through AI: The MIT–IBM Watson AI Lab will explore how AI can deliver economic and societal benefits to a broader range of people, nations, and enterprises. The lab will study the economic implications of AI and investigate how AI can improve prosperity and help individuals achieve more in their lives.
In addition to IBM’s plan to produce innovations that advance the frontiers of AI, a distinct objective of the new lab is to encourage MIT faculty and students to launch companies that will focus on commercializing AI inventions and technologies that are developed at the lab. The lab’s scientists also will publish their work, contribute to the release of open source material, and foster an adherence to the ethical application of AI.
“The field of artificial intelligence has experienced incredible growth and progress over the past decade. Yet today’s AI systems, as remarkable as they are, will require new innovations to tackle increasingly difficult real-world problems to improve our work and lives,” says John Kelly III, IBM senior vice president, Cognitive Solutions and Research. “The extremely broad and deep technical capabilities and talent at MIT and IBM are unmatched, and will lead the field of AI for at least the next decade.”
“I am delighted by this new collaboration,” MIT President L. Rafael Reif says. “True breakthroughs are often the result of fresh thinking inspired by new kinds of research teams. The combined MIT and IBM talent dedicated to this new effort will bring formidable power to a field with staggering potential to advance knowledge and help solve important challenges.”
Both MIT and IBM have been pioneers in artificial intelligence research, and the new AI lab builds on a decades-long research relationship between the two. In 2016, IBM Research announced a multiyear collaboration with MIT’s Department of Brain and Cognitive Sciences to advance the scientific field of machine vision, a core aspect of artificial intelligence. The collaboration has brought together leading brain, cognitive, and computer scientists to conduct research in the field of unsupervised machine understanding of audio-visual streams of data, using insights from next-generation models of the brain to inform advances in machine vision. In addition, IBM and the Broad Institute of MIT and Harvard have established a five-year, $50 million research collaboration on AI and genomics.
MIT researchers were among those who helped coin and popularize the very phrase “artificial intelligence” in the 1950s. MIT pushed several major advances in the subsequent decades, from neural networks to data encryption to quantum computing to crowdsourcing. Marvin Minsky, a founder of the discipline, collaborated on building the first artificial neural network and he, along with Seymour Papert, advanced learning algorithms. Currently, the Computer Science and Artificial Intelligence Laboratory, the Media Lab, the Department of Brain and Cognitive Sciences, and the MIT Institute for Data, Systems, and Society serve as connected hubs for AI and related research at MIT.
For more than 20 years, IBM has explored the application of AI across many areas and industries. IBM researchers invented and built Watson, which is a cloud-based AI platform being used by businesses, developers, and universities to fight cancer, improve classroom learning, minimize pollution, enhance agriculture and oil and gas exploration, better manage financial investments, and much more. Today, IBM scientists across the globe are working on fundamental advances in AI algorithms, science and technology that will pave the way for the next generation of artificially intelligent systems.
For information about employment opportunities with IBM at the new AI Lab, please visit MITIBMWatsonAILab.mit.edu.
Aerospace conglomerate United Technologies is paying $30 billion to acquire Rockwell Collins in a deal that creates one of the world’s largest makers of civilian and defense aircraft components. Rockwell Collins and United’s Aerospace Systems segment will combine to create a new business unit named Collins Aerospace Systems.
United Technologies will pay $140 per share for Rockwell Collins: $93.33 in cash and $46.67 in stock. The $140 price represents a 17.6% premium for Rockwell shareholders.
“This acquisition adds tremendous capabilities to our aerospace businesses and strengthens our complementary offerings of technologically advanced aerospace systems,” said UTC’s chairman and CEO, Greg Hayes.
Both companies have subsidiaries involved in robotics, drones and marine systems but both derive most of their revenue from civilian and defense aerospace.
United Technologies includes Otis elevators, escalators and moving walkways; Pratt & Whitney, which designs and manufactures military and commercial engines, power units and turbojet products; Carrier heating, air-conditioning and refrigeration products; Chubb security and fire-safety solutions; Kidde smoke alarms and fire-safety technology; and UTC Aerospace Systems, which provides aircraft interiors, space and ISR systems, landing gear, and sensors and sensor-based systems for everything from ice detection to guidance and navigation. The Aerospace Systems unit has a wide range of products for multiple unmanned platforms, including unmanned underwater vehicles (UUVs).
Rockwell Collins (not to be confused with Rockwell Automation*, which is heavily involved in robotics but is not part of this acquisition) designs and produces electronic communications, avionics and in-flight entertainment systems for commercial, military and government customers, including navigation and display systems for unmanned commercial and military vehicles. Its electronics are installed in nearly every airline cockpit in the world. Its helmet-mounted display systems and in-car head-up displays are also big revenue producers.
According to Reuters, “The deal also follows a wave of consolidation among smaller aerospace manufacturers in recent years that was caused in part by the need to invest in new technologies such as metal 3-D printing and connected factories to stay competitive. A combined United Technologies and Rockwell Collins could similarly invest, and their broad portfolios have little overlap.”
________________
Rockwell Automation is one of 83 members of the ROBO-Global Robotics & Automation Index, of which I am a co-founder. The index is designed to leverage advances in technology and macroeconomic drivers to capture growth opportunities from robotics and automation.
How will robocars fare in a disaster such as Harvey in Houston, Irma, the tsunamis in Japan or Indonesia, a big earthquake, a fire, 9/11, or a war?
These are very complex questions, and certainly most teams developing cars have not spent a lot of time on solutions to them at present. Indeed, I expect that these will not be solved issues until after the first significant pilot projects are deployed, because as long as robocars are a small fraction of the car population, they will not have that much effect on how things go. Some people who have given up car ownership for robocars — not that many in the early days — will possibly find themselves hunting for transportation the way other people who don’t own cars do today.
It’s a different story when, perhaps a decade from now, we get significant numbers of people who don’t own cars and rely on robocar transportation. That means people who don’t have any cars, not the larger number of people who have dropped from 2 cars to 1 thanks to robocar services.
I addressed a few of these questions before, regarding tsunamis and earthquakes.
A few key questions should be addressed:
How will the car fleets deal with massively increased demand during evacuations and flight during an emergency?
How will the cars deal with shutdown and overload of the mobile data networks, if it happens?
How will cars deal with things like floods, storms, earthquakes and more which block roads or make travel unsafe on certain roads?
Most of these issues revolve around fleets. Privately owned robocars will tend to have steering wheels and be usable as regular cars, and so will only improve the situation. If they encounter unsafe roads, they will ask their passengers for guidance, or for full manual driving. (However, in a few decades, their passengers may no longer be very capable of driving, but the car will handle the hard parts and leave them just to provide video-game-style directions.)
Increased demand
An immediately positive thing is the potential ability for private robocars to, once they have taken their owners to safety, drive back into the evacuation zone as temporary fleet cars, and fetch other people, starting with those selected by the car’s owner, but also members of the public needing assistance. This should dramatically increase the ability of the car fleet to get people moved.
Nonetheless, it is often noted that in a robocar taxi world, a city doesn't need nearly as many cars as it has today. With ideal efficiency, there would be exactly enough seats to handle the annual peak, but few more. We might drop to just a quarter of today's cars, and many of them might be only 1- or 2-seaters. There will be far fewer SUVs, pickup trucks, minivans and other large vehicles, because we don't really need nearly as many as we have today.
To counter this, carpooling may have to be made mandatory. This will be resisted, because it means you don't get to take all the physical belongings you might want to bring in an event like a flood. Worse, we might see conflict between people wanting to bring pets (in carriers), which could take seats that might otherwise be used by people. In a very urgent situation, we could see an order coming down requiring pets to be left behind lest people be left behind. Once it's clear all the people will make it out, people or rescue workers could go back for the pets, but that's not very tenable.
One solution, if there is enough warning of the disaster (as there is for storms but not for other disasters) is for cars from outside the region to be pressed into service. There may be millions of cars within 1-2 hours drive of a disaster zone, allowing a major increase in capacity within a short time. This would include private cars as well as taxi fleets from other cities. Those cities would face a reduction in service and more carpooling. As I write this, Irma is heading to Florida but it is uncertain where. Fleets of cars could, in such a situation, be on their way from other states, distributing themselves along the probable path, and improving the position as the path is better known. They could be ready to do a very quick mass evacuation once the forecast is certain, and return home if nothing happens. For efficiency they could also drive themselves to be placed on car carriers and trains.
Disaster management agencies will need to build tools that calculate how many people need to be moved, and what capacity exists to move them. This will let them calculate how much excess capacity is there to move pets and household possessions.
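A back-of-the-envelope sketch of that kind of calculation might look like the following. Every number is made up for illustration; a real tool would pull live fleet positions and evacuation-zone population data.

```python
# Back-of-the-envelope evacuation capacity check (all numbers are made up).
people_to_move = 250_000
trip_time_h = 3.0                # assumed round trip out of the zone and back
window_h = 24.0                  # assumed time before the storm arrives

local_fleet_seats = 60_000       # seats in robocars already in the region
out_of_town_seats = 40_000       # seats that can drive in from 1-2 hours away

trips_per_vehicle = window_h / trip_time_h
person_trips = (local_fleet_seats + out_of_town_seats) * trips_per_vehicle

print(f"Capacity: {person_trips:,.0f} person-trips for {people_to_move:,} people")
surplus = person_trips - people_to_move
if surplus > 0:
    print(f"Spare capacity for pets and possessions: {surplus:,.0f} seat-trips")
else:
    print("Mandatory carpooling or more outside fleets required.")
```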
If there are robotic transit vehicles they could help a lot. Cars might ferry people from homes to stations where robotic buses (including those from other cities, and human driven buses) could carry lots of people. Then they could return, with no risk to a driver.
The last hopeful item is the ability to do better traffic management. That's an issue in any disaster, as people will resist following rules. If people get used to the idea of lane direction reassignment, it can do a lot here. Major routes could be changed to have only one lane going towards the disaster area, reserved for robocars and emergency vehicles. The next lane would head out of the disaster area and would be strictly robocar-only. The robocars could safely drive in opposite directions at high speed without a barrier, and they would provide a buffer for the human-driven cars in all the other lanes. One simple solution might be to convert the inbound lanes to robocar-only, with lanes allocated based on traffic demand. The outbound lanes would remain outbound and carry a mix of cars. Buses would use the robocar lanes for a congestion-free quick trip out and back in.
Saving cars
It is estimated that as many as a million cars were destroyed by flooding in Harvey. Fortunately, robocars could be told to wake up and leave, even if not pressed into service for evacuation. With good knowledge of the topography of the land, they can also calculate the depth of water by looking at the shape of a flood, and never drive where they could get stuck. Few cars would be destroyed by the flood.
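As a rough sketch of that idea, with invented elevations and an assumed safe wading depth, a car could compare the observed water surface against a terrain map along its route:

```python
# Toy sketch: estimating water depth from known ground elevation and an
# observed flood water level (elevations and the safe-depth threshold are invented).
ground_elevation_m = {           # elevation of road segments from a map
    "Main St @ 1st": 4.2,
    "Main St @ 2nd": 3.1,
    "Riverside Dr":  2.4,
}
flood_surface_m = 3.6            # water surface elevation inferred from the flood's extent
MAX_SAFE_DEPTH_M = 0.2           # assumed depth a car will tolerate

for segment, elevation in ground_elevation_m.items():
    depth = max(0.0, flood_surface_m - elevation)
    passable = depth <= MAX_SAFE_DEPTH_M
    print(f"{segment}: {depth:.1f} m of water -> {'OK' if passable else 'avoid'}")
```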
Loss of data
We’ve seen data networks fail. Cars need the data networks to get orders to travel to destinations to pick people up. They also want updates on road conditions, closures, problems and reallocations.
This is one of the few places where DSRC could help, as it does not depend on the mobile data networks. But that alone is not enough to justify it, and mesh networking is not currently in its design. It is probably more effective to build tools to keep the mobile data networks up, such as a network of emergency cell towers mounted on robotic trucks (and boats and planes?) that could travel quickly to provide limited service, for use by vehicles and for emergency communications only. If people are kept to text messages, the networks have plenty of capacity.
Existing cell towers could also be hardened, to have at least enough backup power for an evacuation, if not for a long disaster.
Roads changed by disasters
You can probably imagine 1,000 strange things that could happen during a disaster. Flooded streets. Trees and powerlines down. Buildings collapsed. Cracks in the road. Washed out bridges. Approaching tsunamis. High winds. Crashed cars.
The good thing is, if you can imagine it, so can the teams building test systems for robocars. They are building simulators to play out every strange situation they can think of, or that they’ve ever encountered in many human lifetimes of real driving on the road. Every car then gets tested to see what it will do in those strange situations, and 1,000 variations of each situation.
Cars could know the lay of the land, so that they could predict how deep water is or where flooding will go. They could know where the high ground is and how to get there without going over the low ground. If the data networks are up, they could get information in real time on road problems and disaster situations. One car might run into trouble, but after that, no other car will go that way if they shouldn’t. This even applies to traffic, something we already do with tools like Waze.
War
War is one of the most difficult challenges. Roads will be blocked or changed. Certainly places will be extremely dangerous and must not be visited. Checkpoints will be created that you must stop for. Communications networks will be compromised. Parties may be attempting to break into your car to take it over and turn it into a weapon against you or others. Insurgents may be modifying robocars and even ordinary drive-by-wire cars to turn them into bomb delivery systems. Cars or streets may come under active attack from snipers, artillery, grenade throwers and more. In the most extreme case, a nuclear weapon or chemical weapon might be used.
The military wants autonomous vehicles. It wants them to move cargo in dangerous but not active war zones, and it wants them for battle. It will be dealing with all these problems, but there is no clear path from its plans to civilian vehicles. Most civilian developers will not consider war situations very heavily until they start wanting to sell cars for use in conflict zones. At first the primary solution will be to have a steering wheel to allow manual control. The second approach will be what I call "video game mode," where you can drive the car with a video game controller. It will take charge of not hitting things; you will just tell it where to go — what turns to make, what side of an obstacle to drive on, and, scariest of all, when to override its own sensors if they believe it can't go forward because of an obstacle.
In a conflict zone, communications will become very suspect and unreliable. No operations can depend on communications, and all communications should be seen as possible vectors for attack. At the same time you need data about places to avoid — and I mean really avoid. This problem needs lots more thought, and for now, I don’t know of people thinking about robotaxi service in conflict zones.
Race Avoidance in the Development of Artificial General Intelligence
Olga Afanasjeva, Jan Feyereisl, Marek Havrda, Martin Holec, Seán Ó hÉigeartaigh, Martin Poliak
SUMMARY
◦ Promising strides are being made in research towards artificial general intelligence systems. This progress might lead to an apparent winner-takes-all race for AGI.
◦ Concerns have been raised that such a race could create incentives to skimp on safety and to defy established agreements between key players.
◦ The AI Roadmap Institute held a workshop to begin an interdisciplinary discussion on how to avoid scenarios in which such a dangerous race could occur.
◦ The focus was on scoping the problem, defining relevant actors, and visualizing possible scenarios of the AI race through example roadmaps.
◦ The workshop was the first step in preparation for the AI Race Avoidance round of the General AI Challenge that aims to tackle this difficult problem via citizen science and promote AI safety research beyond the boundaries of the small AI safety community.
Scoping the problem
With the advent of artificial intelligence (AI) in most areas of our lives, the stakes are increasingly becoming higher at every level. Investments into companies developing machine intelligence applications are reaching astronomical amounts. Despite the rather narrow focus of most existing AI technologies, the extreme competition is real and it directly impacts the distribution of researchers among research institutions and private enterprises.
With the goal of artificial general intelligence (AGI) in sight, the competition on many fronts will become acute with potentially severe consequences regarding the safety of AGI.
The first general AI system will be disruptive and transformative. First-mover advantage will be decisive in determining the winner of the race, due to the expected exponential growth in the system's capabilities and the subsequent difficulty other parties will have catching up. There is a chance that lengthy and tedious AI safety work will cease to be a priority once the race is on. The risk of an AI-related disaster increases when developers do not devote enough attention and resources to the safety of such a powerful system [1].
Once this Pandora’s box is opened, it will be hard to close. We have to act before this happens and hence the question we would like to address is:
How can we avoid general AI research becoming a race between researchers, developers and companies, where AI safety gets neglected in favor of faster deployment of powerful, but unsafe general AI?
Motivation for this post
As a community of AI developers, we should strive to avoid the AI race. Some work has been done on this topic in the past [1,2,3,4,5], but the problem is largely unsolved. We need to focus the community's efforts on tackling this issue and avoiding a potentially catastrophic scenario in which developers race towards the first general AI system while sacrificing the safety of humankind and their own.
This post marks “step 0” that we have taken to tackle the issue. It summarizes the outcomes of a workshop held by the AI Roadmap Institute on 29th May 2017, at GoodAI head office in Prague, with the participation of Seán Ó hÉigeartaigh (CSER), Marek Havrda, Olga Afanasjeva, Martin Poliak (GoodAI), Martin Holec (KISK MUNI) and Jan Feyereisl (GoodAI & AI Roadmap Institute). We focused on scoping the problem, defining relevant actors, and visualizing possible scenarios of the AI race.
This workshop is the first in a series held by the AI Roadmap Institute in preparation for the AI Race Avoidance round of the General AI Challenge (described at the bottom of this page and planned to launch in late 2017). Posing the AI race avoidance problem as a worldwide challenge is a way to encourage the community to focus on solving this problem, explore this issue further and ignite interest in AI safety research.
By publishing the outcomes of this and the future workshops, and launching the challenge focused on AI race avoidance, we would like to promote AI safety research beyond the boundaries of the small AI safety community.
The issue should be subject to a wider public discourse, and should benefit from cross-disciplinary work by behavioral economists, psychologists, sociologists, policy makers, game theorists, security experts, and many more. We believe that transparency is an essential part of solving many of the world’s direst problems, and the AI race is no exception. This in turn may reduce regulatory overshooting and unreasonable political control that could hinder AI research.
Proposed line of thinking about the AI race: Example Roadmaps
One approach for starting to tackle the issue of AI race avoidance, and laying down the foundations for thorough discussion, is the creation of concrete roadmaps that outline possible scenarios of the future. Scenarios can be then compared, and mitigation strategies for negative futures can be suggested.
We used two simple methodologies for creating example roadmaps:
Methodology 1: a simple linear development of affairs is depicted by various shapes and colors representing the following notions: state of affairs, key actor, action, risk factor. The notions are grouped around each state of affairs in order to illustrate principal relevant actors, actions and risk factors.
Methodology 2: each node in a roadmap represents a state, and each link, or transition, represents a decision-driven action by one of the main actors (such as a company/AI developer, government, rogue actor, etc.).
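As a rough illustration of Methodology 2, a roadmap can be treated as a small directed graph in which nodes are states and edges are decision-driven actions by named actors. The sketch below is a hypothetical encoding of that idea, not the Institute’s actual tooling, and the states, actors, and actions in it are invented examples.

```python
# Minimal sketch of Methodology 2: a roadmap as states linked by actor-driven
# transitions. Actor, action, and state names are illustrative only.

from collections import defaultdict


class Roadmap:
    def __init__(self):
        # state -> list of (actor, action, next_state)
        self.transitions = defaultdict(list)

    def add_transition(self, state, actor, action, next_state):
        self.transitions[state].append((actor, action, next_state))

    def scenarios_from(self, state, path=None):
        """Enumerate possible scenario paths starting from a given state."""
        path = (path or []) + [state]
        if not self.transitions[state]:
            yield path  # terminal state: a complete scenario
            return
        for actor, action, next_state in list(self.transitions[state]):
            yield from self.scenarios_from(next_state, path + [f"{actor}: {action}"])


roadmap = Roadmap()
roadmap.add_transition("pre-AGI research", "company",
                       "cuts safety work to win the race", "unsafe AGI deployed")
roadmap.add_transition("pre-AGI research", "company",
                       "joins a safety alliance", "cooperative development")
roadmap.add_transition("cooperative development", "regulator",
                       "codifies safety norms", "safe AGI deployed")

for scenario in roadmap.scenarios_from("pre-AGI research"):
    print(" -> ".join(scenario))
```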
During the workshop, a number of important issues were raised: for example, the need to distinguish the different time-scales for which roadmaps can be created, and the different viewpoints from which they can be drawn (good/bad scenarios, different actors, etc.).
Timescale issue
Roadmapping is frequently a subjective endeavor, and hence multiple approaches to building roadmaps exist. One of the first issues encountered during the workshop concerned time variance. A roadmap created with near-term milestones in mind will differ significantly from a long-term roadmap, yet the two timelines are interdependent. Rather than taking an explicit view on short- versus long-term roadmaps, it might be beneficial to consider them probabilistically. For example, what roadmap can be built if there is a 25% chance of general AI being developed within the next 15 years and a 75% chance of achieving this goal in 15–400 years?
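One way to make such a probabilistic framing concrete is to sample arrival times from the stated mixture and ask how often AGI would arrive before some safety-research milestone. The snippet below is purely illustrative: it reuses the 25%/75% numbers from the question above and assumes, only for the sake of the example, uniform arrival times within each interval and a hypothetical 30-year safety milestone.

```python
# Illustrative only: sampling AGI arrival times from the 25% / 75% mixture
# mentioned above, and asking how often arrival precedes a hypothetical
# safety-readiness milestone. All numbers are assumptions for the example.

import random

def sample_agi_arrival():
    if random.random() < 0.25:
        return random.uniform(0, 15)   # 25% chance: within the next 15 years
    return random.uniform(15, 400)     # 75% chance: 15-400 years out

SAFETY_READY_YEAR = 30   # hypothetical milestone, purely for illustration
N = 100_000

early = sum(sample_agi_arrival() < SAFETY_READY_YEAR for _ in range(N))
print(f"P(AGI arrives before safety milestone) ~ {early / N:.2f}")
```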
Considering the AI race at different temporal scales is likely to bring out different aspects to focus on. For instance, each actor might anticipate a different speed of reaching the first general AI system. This can have a significant impact on the shape of a roadmap and needs to be incorporated in a meaningful and robust way. For example, a Boy Who Cried Wolf situation can erode the trust established between actors and weaken ties between developers, safety researchers, and investors. This in turn could lower confidence that the first general AI system will arrive at the anticipated time; a low belief in fast AGI arrival could, for instance, lead a rogue actor to miscalculate the risks of deploying an unsafe AGI.
Furthermore, two distinct time “chunks” were identified that pose significantly different problems: the pre-AGI era, before the first general AI is developed, and the post-AGI era, after someone is in possession of such a technology.
In the workshop, the discussion focused primarily on the pre-AGI era as the AI race avoidance should be a preventative, rather than a curative effort. The first example roadmap (figure 1) presented here covers the pre-AGI era, while the second roadmap (figure 2), created by GoodAI prior to the workshop, focuses on the time around AGI creation.
Viewpoint issue
We have identified an extensive (but not exhaustive) list of actors that might participate in the AI race, actions taken by them and by others, as well as the environment in which the race takes place, and states in between which the entire process transitions. Table 1 outlines the identified constituents. Roadmapping the same problem from various viewpoints can help reveal new scenarios and risks.
Modelling and investigating the decision dilemmas of various actors repeatedly led to the conclusion that cooperation could spread the adoption of AI safety measures and lessen the severity of race dynamics.
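The kind of decision dilemma referred to above can be illustrated, under strong simplifying assumptions, as a two-player game in which each developer chooses whether to invest in safety or cut it to move faster. The payoff numbers below are invented purely for illustration; they simply show why, without cooperation or external incentives, race dynamics can make skimping on safety individually rational.

```python
# A deliberately simplified, hypothetical payoff table for the race dilemma:
# two developers each choose to "invest" in safety or "cut" it.
# Payoffs (row player, column player) are illustrative, not empirical.

payoffs = {
    ("invest", "invest"): (3, 3),   # slower progress, but safe for everyone
    ("invest", "cut"):    (0, 4),   # the safety-conscious developer loses the race
    ("cut",    "invest"): (4, 0),
    ("cut",    "cut"):    (1, 1),   # race dynamics: fast, unsafe, bad for all
}

def best_response(opponent_choice):
    # Pick the choice that maximizes the row player's own payoff.
    return max(("invest", "cut"), key=lambda my: payoffs[(my, opponent_choice)][0])

for opp in ("invest", "cut"):
    print(f"If the other developer chooses '{opp}', the best response is '{best_response(opp)}'")

# Both best responses come out as "cut": in this toy model, cutting safety is
# individually rational even though mutual investment would leave everyone
# better off, which is exactly the dynamic cooperation is meant to break.
```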
Cooperation issue
Cooperation among the many actors, and a spirit of trust and cooperation in general, is likely to dampen the race dynamics in the overall system. Starting with low-stakes cooperation between different actors, such as talent co-development or collaboration between safety researchers and industry, should allow trust to be built incrementally and the issues at hand to be better understood.
Active cooperation between safety experts and AI industry leaders, including cooperation between different AI-developing companies on questions of AI safety, is likely to result in closer ties and in positive information propagating up the chain, all the way to the regulatory level. A hands-on approach to safety research, with working prototypes, is likely to yield better results than purely theoretical argumentation.
One area that needs further investigation in this regard is forms of cooperation that might seem intuitive but could in fact reduce the safety of AI development [1].
Finding incentives to avoid the AI race
It is natural that any sensible developer would want to prevent their AI system from causing harm to its creator and to humanity, whether it is a narrow AI or a general AI system. In the case of a malignant actor, there is presumably at least a motivation not to harm themselves.
When considering various incentives for safety-focused development, we need to find a robust incentive (or a combination of such) that would push even unknown actors towards beneficial A(G)I, or at least an A(G)I that can be controlled [6].
Tying timescale and cooperation issues together
In order to prevent a negative scenario from happening, it should be beneficial to tie the different time-horizons (anticipated speed of AGI’s arrival) and cooperation together. Concrete problems in AI safety (interpretability, bias-avoidance, etc.) [7] are examples of practically relevant issues that need to be dealt with immediately and collectively. At the same time, the very same issues are related to the presumably longer horizon of AGI development. Pointing out such concerns can promote AI safety cooperation between various developers irrespective of their predicted horizon of AGI creation.
Forms of cooperation that maximize AI safety practice
Encouraging the AI community to discuss and attempt to solve issues such as the AI race is necessary, but it might not be sufficient. We need to find better and stronger incentives to involve actors from a wider spectrum, beyond those traditionally associated with developing AI systems. Cooperation can be fostered through many scenarios, such as:
AI safety research is done openly and transparently,
Access to safety research is free and anonymous: anyone can be assisted and can draw upon the knowledge base without the need to disclose themselves or what they are working on, and without fear of losing a competitive edge (a kind of “AI safety helpline”),
Alliances are inclusive towards new members,
New members are allowed and encouraged to enter global cooperation programs and alliances gradually, which should foster robust trust building and minimize the burden on all parties involved. An example of gradual inclusion in an alliance or a cooperation program is to start cooperating on issues that are low-stakes from an economic competition point of view, as noted above.
Closing remarks — continuing the work on AI race avoidance
In this post we have outlined our first steps in tackling the AI race. We welcome you to join the discussion and help us gradually come up with ways to minimize the danger of converging on a state in which this becomes a real issue.
The AI Roadmap Institute will continue to work on AI race roadmapping, identifying further actors, recognizing yet unseen perspectives, time scales and horizons, and searching for risk mitigation scenarios. We will continue to organize workshops to discuss these ideas and publish roadmaps that we create. Eventually we will help build and launch the AI Race Avoidance round of the General AI Challenge. Our aim is to engage the wider research community and to provide it with a sound background to maximize the possibility of solving this difficult problem.
Stay tuned, or even better, join in now.
About the General AI Challenge and its AI Race Avoidance round
The General AI Challenge (Challenge for short) is a citizen science project organized by the general artificial intelligence R&D company GoodAI. GoodAI provided a $5 million fund to be given out in prizes throughout the various rounds of the multi-year Challenge. The goal of the Challenge is to incentivize talent to tackle crucial research problems in human-level AI development and to speed up the search for safe and beneficial general artificial intelligence.
The independent AI Roadmap Institute, founded by GoodAI, collaborates with a number of other organizations and researchers on various A(G)I related issues including AI safety. The Institute’s mission is to accelerate the search for safe human-level artificial intelligence by encouraging, studying, mapping and comparing roadmaps towards this goal. The AI Roadmap Institute is currently helping to define the second round of the Challenge, AI Race Avoidance, dealing with the question of AI race avoidance (set to launch in late 2017).
Participants of the second round of the Challenge will deliver analyses and/or solutions to the problem of AI race avoidance. Their submissions will be evaluated in a two-phase evaluation process: through a) expert acceptance and b) business acceptance. The winning submissions will receive monetary prizes, provided by GoodAI.
Expert acceptance
The expert jury prize will be awarded for an idea, concept, feasibility study, or preferably an actionable strategy that shows the most promise for ensuring safe development and avoiding rushed deployment of potentially unsafe A(G)I as a result of market and competition pressure.
Business acceptance
Industry leaders will be invited to evaluate the top 10 submissions from the expert jury prize, and possibly a few more submissions of their choice (these may include proposals with potential for a significant breakthrough that fall short on the feasibility criteria).
The business acceptance prize is a way to contribute to establishing a balance between the research and the business communities.
The proposals will be treated under an open licence and will be made public together with the names of their authors. Even in the absence of a “perfect” solution, the goal of this round of the General AI Challenge should be fulfilled by advancing the work on this topic and promoting interest in AI safety across disciplines.
While working for the global management consulting company Accenture, Gregory Falco discovered just how vulnerable the technologies underlying smart cities and the “internet of things” — everyday devices that are connected to the internet or a network — are to cyberterrorism attacks.
“What happened was, I was telling sheiks and government officials all around the world about how amazing the internet of things is and how it’s going to solve all their problems and solve sustainability issues and social problems,” Falco says. “And then they asked me, ‘Is it secure?’ I looked at the security guys and they said, ‘There’s no problem.’ And then I looked under the hood myself, and there was nothing going on there.”
Falco is currently transitioning into the third and final year of his PhD within the Department of Urban Studies and Planning (DUSP), and he carries out his research at the Computer Science and Artificial Intelligence Laboratory (CSAIL). His focus is cybersecurity for urban critical infrastructure, and the internet of things, or IoT, is at the center of his work. A washing machine, for example, that is connected to an app on its owner’s smartphone is considered part of the IoT. There are billions of IoT devices that don’t have traditional security software because they’re built with small amounts of memory and low-power processors. This makes these devices susceptible to cyberattacks and may give hackers a gateway to breach other devices on the same network.
Falco’s concentration is on industrial controls and embedded systems such as automatic switches found in subway systems.
“If someone decides to figure out how to access a switch by hacking another access point that is communicating with that switch, then that subway is not going to stop, and people are going to die,” Falco says. “We rely on these systems for our life functions — critical infrastructure like electric grids, water grids, or transportation systems, but also our health care systems. Insulin pumps, for example, are now connected to your smartphone.”
Citing real-world examples, Falco notes that Russian hackers were able to take down the Ukrainian capital city’s electric grid, and that Iranian hackers interfered with the computer-guided controls of a small dam in Rye Brook, New York.
Falco aims to help combat potential cyberattacks through his research. One arm of his dissertation, which he is working on with renowned negotiation professor Lawrence Susskind, looks at how best to negotiate with cyberterrorists. With CSAIL Principal Research Scientist Howard Shrobe, Falco also seeks to determine whether it is possible to predict which control-system vulnerabilities could be exploited in critical urban infrastructure. The final branch of Falco’s dissertation is a collaboration with NASA’s Jet Propulsion Laboratory: he has secured a contract to develop an artificial-intelligence-powered automated attack generator that can identify all the possible ways someone could hack and destroy NASA’s systems.
“What I really intend to do for my PhD is something that is actionable to the communities I’m working with,” Falco says. “I don’t want to publish something in a book that will sit on a shelf where nobody would read it.”
“Not science fiction anymore”
Falco’s battle against cyberterrorism has also led him to co-found NeuroMesh, a startup dedicated to protecting IoT devices by using the same techniques hackers use.
“The concept of my startup is, ‘Let’s use hacker tools to defeat hackers,’” Falco says. “If you don’t know how to break it, you don’t know how to fix it.”
One tool hackers use is called a botnet. Once a botnet gets onto a device, it often kills off any other malware there so that it can use all of the device’s processing power for itself. Botnets also play “king of the hill” on the device, refusing to let other botnets latch on.
NeuroMesh turns a botnet’s features against itself to create a good botnet. By re-engineering the botnet, programmers can use it to defeat any kind of malware that arrives on a device.
“The benefit is also that when you look at securing IoT devices with low memory and low processing power, it’s impossible to put any security on them, but these botnets have no problem getting on there because they are so small,” Falco says.
Much like a vaccine protects against diseases, NeuroMesh applies a cyber vaccine to protect industrial devices from cyberattacks. And, by leveraging the bitcoin blockchain to update devices, NeuroMesh further fortifies the security system to block other malware from attacking vital IoT devices.
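As a very rough conceptual sketch of the “use hacker tools to defeat hackers” idea described above, a small resident agent could act as a benign “king of the hill”: keep itself running and evict any process that is not on an approved list. The snippet below is illustrative only, is not NeuroMesh’s implementation, uses the third-party psutil library, and should not be run on a real system, since it would terminate legitimate processes.

```python
# Conceptual sketch only: a tiny whitelist watchdog in the spirit of a
# "good botnet" that occupies a device and evicts unapproved processes.
# This is NOT NeuroMesh's implementation; process names are hypothetical,
# and running this on a real machine would kill legitimate processes.

import time
import psutil  # third-party: pip install psutil

ALLOWED = {"init", "sshd", "sensor_daemon", "good_botnet"}  # hypothetical whitelist

def sweep_once():
    for proc in psutil.process_iter(["pid", "name"]):
        name = proc.info["name"] or ""
        if name not in ALLOWED:
            try:
                proc.terminate()  # evict anything not on the whitelist
                print(f"terminated unapproved process {name} (pid {proc.info['pid']})")
            except psutil.Error:
                pass  # process already gone or not ours to touch

if __name__ == "__main__":
    while True:  # "king of the hill": keep sweeping so nothing else latches on
        sweep_once()
        time.sleep(5)
```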
Recently, Falco and his team pitched their botnet vaccine at MIT’s $100K Accelerate competition and placed second. Falco’s infant son was in the audience while Falco was presenting how NeuroMesh’s technology could secure a baby monitor, as an example, from being hacked. The startup advanced to MIT’s prestigious 100K Launch startup competition, where they finished among the top eight competitors. NeuroMesh is now further developing its technology with the help of a grant from the Department of Energy, working with Stuart Madnick, who is the John Norris Maguire Professor at MIT, and Michael Siegel, a principal research scientist at MIT’s Sloan School of Management.
“Enemies are here. They are on our turf and in our wires. It’s not science fiction anymore,” Falco says. “We’re protecting against this. That’s what NeuroMesh is meant to do.”
The human tornado
Falco’s abundant energy has led his family to call him “the tornado.”
“One-fourth of my mind is on my startup, one-fourth on finishing my dissertation, and the other half is on my 11-month-old because he comes with me when my wife works,” Falco says. “He comes to all our venture capital meetings and my presentations. He’s always around and he’s generally very good.”
As a high school student, Falco’s energy and excitement for engineering drove him to discover a new physics wave theory. Applying this to the tennis racket, he invented a new, control-enhanced method of stringing, with which he won various science competitions (and tennis matches). He used this knowledge to start a small business for stringing rackets. The thrill of business took him on a path to Cornell University’s School of Hotel Administration. After graduating early, Falco transitioned into the field of sustainability technology and energy systems, and returned to his engineering roots by earning his LEED AP (Leadership in Energy and Environmental Design) accreditation and a master’s degree in sustainability management from Columbia University.
His excitement followed him to Accenture, where he founded the smart cities division and eventually learned about the vulnerability of IoT devices. For the past three years, Falco has also been sharing his newfound knowledge about sustainability and computer science as an adjunct professor at Columbia University.
“My challenge is always to find these interdisciplinary holes because my background is so messed up. You can’t say, this guy is a computer scientist or he’s a business person or an environmental scientist because I’m all over the place,” he says.
That’s part of the reason why Falco enjoys taking care of his son, Milo, so much.
“He’s the most awesome thing ever. I see him learning and it’s really amazing,” Falco says. “Spending so much time with him is very fun. He does things that my wife gets frustrated at because he’s a ball of energy and all over the place — just like me.”