
Automatic code reuse

“CodeCarbonCopy enables one of the holy grails of software engineering: automatic code reuse,” says Stelios Sidiroglou-Douskos, a research scientist at CSAIL. Credit: MIT News

by Larry Hardesty

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a new system that allows programmers to transplant code from one program into another. The programmer can select the code from one program and an insertion point in a second program, and the system will automatically make the modifications necessary — such as changing variable names — to integrate the code into its new context.

Crucially, the system is able to translate between “data representations” used by the donor and recipient programs. An image-processing program, for instance, needs to be able to handle files in a range of formats, such as jpeg, tiff, or png. But internally, it will represent all such images using a single standardized scheme. Different programs, however, may use different internal schemes. The CSAIL researchers’ system automatically maps the donor program’s scheme onto that of the recipient, to import code seamlessly.

The researchers presented the new system, dubbed CodeCarbonCopy, at the Association for Computing Machinery’s Symposium on the Foundations of Software Engineering.

“CodeCarbonCopy enables one of the holy grails of software engineering: automatic code reuse,” says Stelios Sidiroglou-Douskos, a research scientist at CSAIL and first author on the paper. “It’s another step toward automating the human away from the development cycle. Our view is that perhaps we have written most of the software that we’ll ever need — we now just need to reuse it.”

The researchers conducted eight experiments in which they used CodeCarbonCopy to transplant code between six popular open-source image-processing programs. Seven of the eight transplants were successful, with the recipient program properly executing the new functionality.

Joining Sidiroglou-Douskos on the paper are Martin Rinard, a professor of electrical engineering and computer science; Fan Long, an MIT graduate student in electrical engineering and computer science; and Eric Lahtinen and Anthony Eden, who were contract programmers at MIT when the work was done.

Mutatis mutandis

With CodeCarbonCopy, the first step in transplanting code from one program to another is to feed both of them the same input file. The system then compares how the two programs process the file.

If, for instance, the donor program performs a series of operations on a particular piece of data and loads the result into a variable named “mem_clip->width,” and the recipient performs the same operations on the same piece of data and loads the result into a variable named “picture.width,” the system will infer that the variables are playing the same roles in their respective programs.

Once it has identified correspondences between variables, CodeCarbonCopy presents them to the user. It also presents all the variables in the donor for which it could not find matches in the recipient, together with those variables’ initial definitions. Frequently, those variables are playing some role in the donor that’s irrelevant to the recipient. The user can flag those variables as unnecessary, and CodeCarbonCopy will automatically excise any operations that make use of them from the transplanted code.
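The paper describes this matching over real execution traces; as a rough sketch of the idea only (not CodeCarbonCopy’s actual implementation), here is a toy Python version in which each “trace” is simply a mapping from variable names to the values observed while processing the shared input. The variable names and values are invented for illustration.

```python
# Toy sketch of value-based variable matching -- not the real system.
def match_variables(donor_trace, recipient_trace):
    """Pair donor/recipient variables whose observed value sequences agree.

    Each trace maps a variable name to the tuple of values it held
    while the program processed the shared input file.
    """
    matches, unmatched = {}, []
    for d_var, d_vals in donor_trace.items():
        candidates = [r_var for r_var, r_vals in recipient_trace.items()
                      if r_vals == d_vals]
        if candidates:
            matches[d_var] = candidates[0]   # presented to the user to confirm
        else:
            unmatched.append(d_var)          # user may flag as unnecessary
    return matches, unmatched

donor = {"mem_clip->width": (640,), "mem_clip->height": (480,),
         "donor_only_mode": (3,)}
recipient = {"picture.width": (640,), "picture.height": (480,)}

print(match_variables(donor, recipient))
# ({'mem_clip->width': 'picture.width', 'mem_clip->height': 'picture.height'},
#  ['donor_only_mode'])
```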

New order

To map the data representations from one program onto those of the other, CodeCarbonCopy looks at the precise values that both programs store in memory. Every pixel in a digital image, for instance, is governed by three color values: red, green, and blue. Some programs, however, store those triplets of values in the order red, green, blue, and others store them in the order blue, green, red.

If CodeCarbonCopy finds a systematic relationship between the values stored by one program and those stored by the other, it generates a set of operations for translating between representations.
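For the RGB-versus-BGR case above, that inference can be pictured as a search for a permutation that consistently maps one program’s stored pixel triplets onto the other’s. The following is a minimal, hypothetical sketch of that search, not the system’s actual memory-level analysis:

```python
# Hedged sketch: infer a channel permutation (e.g., RGB vs. BGR) from
# corresponding pixel triplets stored by the donor and recipient.
from itertools import permutations

def infer_channel_order(donor_pixels, recipient_pixels):
    """Return p such that recipient[i] equals donor[i] reordered by p."""
    for perm in permutations(range(3)):
        if all(tuple(d[j] for j in perm) == r
               for d, r in zip(donor_pixels, recipient_pixels)):
            return perm
    return None  # no systematic permutation found

donor     = [(255, 0, 0), (10, 20, 30)]   # pixels stored as (R, G, B)
recipient = [(0, 0, 255), (30, 20, 10)]   # same pixels stored as (B, G, R)

perm = infer_channel_order(donor, recipient)
print(perm)  # (2, 1, 0): reverse the channel order

# The inferred permutation becomes the translation operation:
translate = lambda pixel: tuple(pixel[j] for j in perm)
assert [translate(d) for d in donor] == recipient
```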

CodeCarbonCopy works well with file formats, such as images, whose data is rigidly organized, and with programs, such as image processors, that store data representations in arrays, which are essentially rows of identically sized memory units. In ongoing work, the researchers are looking to generalize their approach to file formats that permit more flexible data organization and programs that use data structures other than arrays, such as trees or linked lists.

“In general, code quoting is where a lot of problems in software come from,” says Vitaly Shmatikov, a professor of computer science at Cornell Tech, a joint academic venture between Cornell University and Israel’s Technion. “Both bugs and security vulnerabilities — a lot of them occur when there is functionality in one place, and someone tries to either cut and paste or reimplement this functionality in another place. They make a small mistake, and that’s how things break. So having an automated way of moving code from one place to another would be a huge, huge deal, and this is a very solid step toward having it.”

“Recognizing irrelevant code that’s not important for the functionality that they’re quoting, that’s another technical innovation that’s important,” Shmatikov adds. “That’s the kind of thing that was an obstacle for a lot of previous approaches — that you know the right code is there, but it’s mixed up with a lot of code that is not relevant to what you’re trying to do. So being able to separate that out is a fairly significant technical contribution.”

Descartes revisited: Do robots think?

This past week, a robotic first happened: ABB’s Yumi robot conducted the Lucca Philharmonic Orchestra in Pisa, Italy. The dual-armed robot overshadowed even its vocal collaborator, Italian tenor Andrea Bocelli. While many will try to hype the performance as ushering in a new era of mechanical musicians, Yumi’s artistic career was short-lived: the concert was part of the opening ceremonies of Italy’s First International Festival of Robotics.

Italian conductor Andrea Colombini said of his student, “The gestural nuances of a conductor have been fully reproduced at a level that was previously unthinkable to me. This is an incredible step forward, given the rigidity of gestures by previous robots. I imagine the robot could serve as an aid, perhaps to execute, in the absence of a conductor, the first rehearsal, before the director steps in to make the adjustments that result in the material and artistic interpretation of a work of music.”

Harold Cohen with his robot AARON

Yumi is not the first computer artist. In 1973, professor and artist Harold Cohen created a software program called AARON – a mechanical painter. AARON’s works have been exhibited worldwide, including at the prestigious Venice Biennale. Following Cohen’s lead, Dr. Simon Colton of London’s Imperial College created “The Painting Fool,” whose works were on display in Paris’s prestigious Galerie Oberkampf in 2013. Colton wanted to test whether he could cross the emotional threshold with an artistic Turing Test. Colton explained, “I realized that the Painting Fool was a very good mechanism for testing out all sorts of theories, such as what it means for software to be creative. The aim of the project is for the software itself to be taken seriously as a creative artist in its own right, one day.”

In June 2015, Google’s Brain AI research team took artistic theory to the next level by infusing its software with the ability to create a remarkably human-like quality of imagination. To do this, Google’s programmers took a cue from one of the most famous masters of all time, Leonardo da Vinci. Da Vinci suggested that aspiring artists should start by looking at stains or marks on walls to create visual fantasies. Google’s neural net did just that, translating the layers of the image into spots and blotches with new stylized painterly features (see examples below).

[Images: 1) a photograph of a standard Southwestern scene uploaded by Google; 2) the computer’s translation of the image’s layers into stylized, painterly features.]

In describing his creation, Google Brain senior scientist Douglas Eck said this past March, “I don’t think that machines themselves just making art for art’s sake is as interesting as you might think. The question to ask is, can machines help us make a new kind of art?” The goal of Eck’s platform, called Magenta, is to enable laypeople, talented or not, to design new kinds of music and art, much as synthesizer keyboards, drum machines and camera filters do. Dr. Eck himself is an admittedly frustrated, failed musician who hopes that Magenta will revolutionize the arts in the same way as the electric guitar. “The fun is in finding new ways to break it and extend it,” Eck said excitedly.

The artistic development and growth of these computer programs is remarkable. Cohen, who passed away last year, said in a 2010 lecture regarding AARON, “with no further input from me, it can generate unlimited numbers of images, it’s a much better colorist than I ever was myself, and it typically does it all while I’m tucked up in bed.” Feeling proud, he later corrected himself: “Well, of course, I wrote the program. It isn’t quite right to say that the program simply follows the rules I gave it. The program is the rules.”

In reflecting on the societal implications of creative bots, one cannot help but be reminded of the famous statement by philosopher René Descartes: “I think, therefore I am.” Challenging this idea for the robotic age, Professor Arai Noriko tested the thinking capabilities of robots. In 2011, Noriko led a research team at Japan’s National Institute of Informatics to build an artificial intelligence program smart enough to pass the rigorous entrance exam of the University of Tokyo.

“Passing the exam is not really an important research issue, but setting a concrete goal is useful. We can compare the current state-of-the-art AI technology with 18-year-old students,” explained Dr. Noriko. The original goal set out by Noriko’s team was for the Todai robot (named for the university) to be admitted to college by 2021. At a TED conference earlier this year, Noriko shocked the audience by revealing that Todai beat 80% of the students taking the exam, which consisted of seven sections, including math, English, science, and even a 600-word essay. Rather than celebrating, Noriko shared her fear with the crowd: “I was alarmed.”

Todai is able to search and process an immense amount of data, but unlike humans it does not read, even with 15 billion sentences already in its neural network. Noriko reminds us that “humans excel at pattern recognition, creative projects, and problem solving. We can read and understand.” However, she is deeply concerned that modern educational systems are more focused on facts and figures than on creative reasoning, especially because humans could never compete with an AI at fact-checking. Noriko pointed to the entrance exam as an example: the Todai robot failed to grasp a multiple-choice question that would have been obvious even to young children. She tested her thesis at a local middle school and was dumbfounded when one-third of the students couldn’t even “answer a simple reading comprehension question.” She concluded that in order for humans to compete with robots, “We have to think about a new type of education.”

Cohen also wrestled with the question of a thinking robot and whether his computer program could ever have the emotional impact of a human artist like Monet or Picasso. In his words, to reach that kind of level a machine would have to “develop a sense of self.” Cohen professed that “if it doesn’t, it means that machines will never be creative in the same sense that humans are creative.” Cohen later qualified his remarks about robotic creativity, adding, “it doesn’t mean that machines have no part to play with respect to creativity.”

Noriko is much more to the point: “How we humans will coexist with AI is something we have to think about carefully, based on solid evidence. At the same time, we have to think in a hurry because time is running out.” John Cryan, CEO of Deutsche Bank, echoed Noriko’s sentiment at a banking conference last week. Cryan said, “In our banks we have people behaving like robots doing mechanical things; tomorrow we’re going to have robots behaving like people. We have to find new ways of employing people, and maybe people need to find new ways of spending their time.”

iRobot on the defensive

SharkNinja, a well-known marketer of home consumer products, has entered the American robotic vacuum market with a product priced to compete against iRobot’s Roomba line of floor cleaners. The new Shark ION Robot navigates floors and carpets, and docks and recharges automatically. It sells at a price point very favorable compared to iRobot’s.

SharkNinja has partnered with ECOVACS, a Chinese manufacturer of many robotic products including robotic vacuums and floor cleaners, to custom manufacture the new Shark ION Robot – thus SharkNinja isn’t starting from scratch. [ECOVACS is a big seller in China. On Singles Day (11/11/2016), online via the e-commerce giant Alibaba, ECOVACS sold $60.2 million of robotic products, up from $47.6 million in 2015. The star performer was a DEEBOT robotic vacuum which sold 135,000 units. The ECOVACS window-cleaning robot was another standout product, with more than 10,000 units sold.]

iRobot’s stock took an 18% hit, perhaps on the news of the product launch by SharkNinja, or perhaps because some prominent analysts downgraded their ratings of the company, saying that iRobot is susceptible to a lower-cost, similarly capable, well-regarded branded product. The SharkNinja robotic vacuum fits those criteria.

SharkNinja is a fast-growing vendor of blenders, vacuums and other household products. They displaced Dyson in vacuums by engineering a superior product at a value price point (the Dyson robot vacuum sold for $1,000). SharkNinja, by using disruptive pricing and infomercial marketing, has garnered around 20% of the U.S. market for vacuums in just 10 years. SharkNinja’s non-robotic vacuums and blenders command significant shelf space and are very popular with customers and sellers alike. Thus they are a formidable competitor.

Also this month, SharkNinja raised an undisclosed sum from CDH Investments, a private equity fund with $20 billion of assets under management. CDH said they purchased “a significant equity interest” in SharkNinja.

iRobot’s Defensive Moves

iRobot has been making defensive moves recently. It acquired its two main distributors: Robopolis in Europe and Sales On Demand in Japan. It has used up much of its cash reserve to buy back shares of the company. And it has sued Hoover, Black & Decker, Bobsweep, Bissell Homecare, and Micro-Star International (MSI) (which manufactures the Hoover and Black & Decker vacuums) over what it considers patent violations.

According to Zacks Equity Research, iRobot just favorably settled with MSI in an agreement under which MSI will exit the global robotic cleaning industry and pay an undisclosed compensation fee to iRobot.

“This settlement represents the first successful milestone on the enforcement effort iRobot initiated earlier this year,” said Glen Weinstein, executive vice president and chief legal officer at iRobot. “The agreement by MSI to exit the robotic cleaning industry signifies the value of iRobot’s intellectual property and the company’s efforts to protect it.”

Nevertheless, iRobot may be vulnerable to an international consumer products company with a full range of offerings that competes with similar products at lower prices.

Robots Podcast #242: Disney Robotics, with Katsu Yamane



In this episode, Audrow Nash interviews Katsu Yamane, Senior Research Scientist at Disney, about robotics at Disney. Yamane discusses Disney’s history with robots, how Disney currently uses robots, how designing robots at Disney differs from academia and industry, a realistic robot simulator used by Disney’s animators, and becoming a Disney Research “Imagineer.”

Katsu Yamane

Katsu received his PhD in mechanical engineering from the University of Tokyo in 2002. Following postdoctoral work at Carnegie Mellon University from 2002 to 2003, he was a faculty member at the University of Tokyo until he joined Disney Research, Pittsburgh, in October 2008. His main research area is humanoid robot control and motion synthesis, in particular methods involving human motion data and dynamic balancing. He is also interested in developing algorithms for creating character animation. He has always been fascinated by the way humans control their bodies, which led him to research on biomechanical human modeling and simulation to understand human sensation and motor control.

A foldable cargo drone

The field of drone delivery is currently a big topic in robotics. One reason your internet shopping doesn’t yet arrive via drone is that current flying robots can pose a safety risk to people and are difficult to transport and store.

A team from the Floreano Lab, NCCR Robotics and EPFL presents a new type of cargo drone that is inspired by origami, is lightweight and easily manoeuvrable, and uses a foldaway cage to ensure safety and transportability.

A foldable protective cage sits around the multicopter and the package it carries, shielding the spinning propellers and protecting the people around it. When the folding cage is opened to load or unload the drone, a safety mechanism cuts off the motors, so the drone is safe to handle even for completely untrained users.


Where this drone takes a step forward is in the folding cage, which allows it to be easily stowed away and transported. The team took inspiration from the origami folding shelters developed for space exploration and adapted them to create a Chinese-lantern shape; instead of paper, the skeletal structure is built from carbon fibre tubes and 3D-printed flexible joints. The cage is opened and closed by releasing a joint mechanism on the top and bottom and pushing apart the resulting gap; in fact, both opening and closing can be performed in just 1.2 seconds.

By adding such a cage to a multicopter, the team ensures safety for those who come into contact with the drone. The drone can even be caught while flying, meaning it can make deliveries to people in places where landing is hard or impossible, such as a collapsed building during search-and-rescue missions, where first aid, medication, water or food may need to be delivered quickly.

Currently, the drone can carry 0.5 kg of cargo over 2 km; visitors to EPFL this summer may have noticed it transporting small items across campus, some 150 flights in all. The team hopes that, by scaling up the design, the drone will be able to carry as much as 2 kg over 15 km, a payload and range that would allow longer-distance deliveries.

Reference:
P.M. Kornatowski, S. Mintchev, and D. Floreano, “An origami-inspired cargo drone”, in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2017.

Udacity Robotics video series: Interview with Abdelrahman Elogeel from Amazon Robotics


Mike Salem from Udacity’s Robotics Nanodegree is hosting a series of interviews with professional roboticists as part of their free online material.

This week we’re featuring Mike’s interview with Abdelrahman Elogeel. Abdelrahman is a Software Development Engineer in the Core Machine Learning team at Amazon Robotics. His work includes bringing state-of-the-art machine learning techniques to bear on various problems for robots at Amazon’s robotic fulfillment centers.

You can find all the interviews here. We’ll be posting them regularly on Robohub.

New NHTSA Robocar regulations are a major, but positive, reversal

NHTSA released its latest draft robocar regulations just a week after the U.S. House passed a new regulatory regime and the Senate started working on its own. The proposed regulations preempt state regulation of vehicle design and allow companies to apply for high-volume exemptions from the standards that exist for human-driven cars.

It’s clear that the new approach will be quite different from the Obama-era one: much more hands-off. There are not a lot of things to like about the Trump administration, but this could be one of them. The prior regulations ran to 116 pages of considerable detail, though they were mostly listed as “voluntary.” I wrote a long critique of those regulations in a four-part series which can be found in my NHTSA tag. The agency seems to have paid attention to that commentary and the similar commentary of others.

At 26 pages, the new report is much more modest, and actually says very little. Indeed, I could sum it up as follows:

  • Do the stuff you’re already doing
  • Pay attention to where and when your car can drive and document that
  • Document your processes internally and for the public
  • Go to the existing standards bodies (SAE, ISO etc.) for guidance
  • Create a standard data format for your incident logs
  • Don’t forget all the work on crash avoidance, survival and post-crash safety in modern cars that we worked very hard on
  • Plans for how states and the feds will work together on regulating this

Goals vs. Approaches

The document does a better job at understanding the difference between goals — public goods that it is the government’s role to promote — and approaches to those goals, which should be entirely the province of industry.

The new document is much more explicit that the 12 “safety design elements” are voluntary. I continue to believe that there is a risk they may not be truly voluntary, as there will be great pressure to conform with them, and possible increased liability for those who don’t, but the new document tries to avoid that, and its requests are much milder.

The document understands the important realization that developers in this space will be creating new paths to safety and establishing new and different concepts of best practices. Existing standards have value, but they can at best encode conventional wisdom. Robocars will not be created using conventional wisdom. The new document instead merely recommends that the existing standards be considered, which is a reasonable plan.

A lightweight regulatory philosophy

My own analysis is guided by a lightweight regulatory approach which has been the norm until now. The government’s role is to determine important public goals and interests, and to use regulations and enforcement when, and only when, it becomes clear that industry can’t be trusted to meet these goals on its own.

In particular, the government should very rarely regulate how something should be done, and focus instead on what needs to happen as the end result, and why. In the past, all automotive safety technologies were developed by vendors and deployed, sometimes for decades, before they were regulated. When they were regulated, it was more along the lines of “All cars should now have anti-lock brakes.” Only with the more mature technologies have the regulations had to go into detail on how to build them.

Worthwhile public goals include safety, of course, and the promotion of innovation. We want to encourage both competition and cooperation in the right places. We want to protect consumer rights and privacy. (The prior regulations proposed a mandatory sharing of incident data which is watered down greatly in these new regulations.)

I call this lightweight because others have called for a great deal more regulation. I don’t, however, view it as highly laissez-faire. Driving is already highly regulated, and the idea that regulators would need to write rules to prevent companies from doing things they have shown no evidence of doing seems odd to me, particularly in a fast-changing field where regulators (and even developers) admit they have limited knowledge of what the technology’s final form will actually be.

Stating the obvious

While I laud the reduction of detail in these regulations, it’s worth pointing out that many of the remaining sections are stripped to the point of mostly outlining “motherhood” requirements — requirements which are obvious and that every developer has known for some time. You don’t have to say that the vehicle should follow the vehicle code and not hit other cars. Anybody who needs to be told that is not a robocar developer. The set of obvious goals belongs better in a non-governmental advice document (which this does in fact declare itself in part to be, though of course governmental) than in something considered regulatory.

Overstating the obvious and discouraging the “black box.”

Sometimes a statement can be both obvious and possibly wrong in the light of new technology. The document has many requirements that vendors document their thinking and processes, which may be very difficult to do with systems built with machine learning. Machine learning sometimes produces a “black box” that works, but with minimal knowledge as to how it works. It may be that such systems will outperform other systems, leaving us with the dilemma of choosing between a superior system we can’t document and understand, and an inferior one we can.

There is a new research area known as “explainable AI” which hopes to bridge this gap and make it possible to document and understand why machine learning systems operate as they do. This is promising research, but it may never be complete. In spite of this, EU regulations are already attempting to forbid unexplainable AI. This may cut off very productive avenues of development — we don’t know enough to be sure about this as yet.

Some minor notes

The name

The new report pushes a new term — Automated Driving Systems. It seems every iteration comes up with a new name. The field is really starting to need a name people agree on, since nobody seems to much like driverless cars, self-driving cars, autonomous vehicles, automated vehicles, robocars or any of the others. This one is just as unwieldy, and its acronym is an English word and thus hard to search for.

The levels

The SAE levels continue to be used. I have been critical of the levels before, recently in this satire. It is wrong to try to understand robocars primarily through the role of humans in their operation, and wrong to suggest there is a progression of levels based on that.

The 12 safety elements

As noted, most of the sections simply advise obvious policies which everybody is already doing, and advise that teams document what they are doing.

1. System Safety

This section is modest, and describes fairly common existing practices for high reliability software systems. (Almost to the point that there is no real need for the government to point them out.)

2. Operational Design Domain

The idea of defining the situations where the car can do certain things is a much better approach than imagining levels of human involvement. I would even suggest it replace the levels, with the human seen simply as one of the tools used to operate outside certain domains. Still, I see minimal need for NHTSA to say this: everybody already knows that roads and their conditions are different and complex and need different classes of technology.

3. Object and Event Detection and Response, 4. Fallback, 5. Validation, 6. HMI

Again, this is fairly redundant. Vendors don’t need to be told that vehicles must obey the vehicle code and stay in their lane and not hit things. That’s already the law. They know that only with a fallback strategy can they approach the reliability needed.

7. Computer Security

While everything here is already on the minds of developers, I don’t fault the reminder here because traditional automakers have a history of having done security badly. The call for a central clearing house on attacks is good, though it should not necessarily be Auto-ISAC.

8. Occupant Protection

A great deal of the current FMVSS (Federal Motor Vehicle Safety Standards) are about this, and because many vehicles may use exemptions from FMVSS to get going, a reminder about this is in order.

10. Data Recording

The most interesting proposal in the prior document was a requirement for public sharing of incident and crash data so that all teams could learn from every problem any team encounters. This would speed up development and improve safety, but vendors don’t like the fact it removes a key competitive edge — their corpus of driving experience.

The new document calls for a standard data format, and makes general motherhood calls for storing data in a crash, something everybody already does.

The call for a standard is actually difficult. Every vehicle has a different sensor suite and its own tools to examine the sensor data. Trying to standardize that on a truly useful level is a serious task. I had expected this task to fall to outside testing companies, who would learn (possibly reverse engineering) the data formats of each car and try to put them in a standard format that was actually useful. I fear a standard agreed upon by major players (who don’t want to share their data) will be minimal and less useful.

State Roles

A large section of the document is about the bureaucratic distribution of roles between states and federal bodies. I will provide analysis of this later.

Conclusion

This document reflects a major change, almost a reversal, and largely a positive one. Going forward from here, I would encourage that the debate on regulation focus on

  • What public goods does the government have an interest in protecting?
  • Which ones are vendors showing they can’t be trusted to support voluntarily, both by present actions and past history?
  • How can innovation be encouraged and facilitated, and good communication be made to the public about what’s going on?

One of the key public goods missing from this document is privacy protection. This is one of the areas where vendors don’t have a great past history.
Another one is civil rights protection — for example what powers police will want over cars — where the government has a bad history.

Rodney Brooks on the future of robotics and AI

If you follow the robotics community on the twittersphere, you’ll have noticed that Rodney Brooks is publishing a series of essays on the future of robotics and AI which has been gathering wide attention.

His articles are designed to be read as standalone essays, in any order. Robohub will be featuring links to the articles as they come out over the next six months or so. They are worth the read.

The Seven Deadly Sins of Predicting the Future of AI published on September 7, 2017.

Domo Arigato Mr. Roboto published on August 28, 2017.

Machine Learning Explained published on August 28, 2017.

“Peel-and-go” printable structures fold themselves

A new method produces a printable structure that begins to fold itself up as soon as it’s peeled off the printing platform. Credit: MIT
by Larry Hardesty

As 3-D printing has become a mainstream technology, industry and academic researchers have been investigating printable structures that will fold themselves into useful three-dimensional shapes when heated or immersed in water.

In a paper appearing in the American Chemical Society’s journal Applied Materials and Interfaces, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and colleagues report something new: a printable structure that begins to fold itself up as soon as it’s peeled off the printing platform.

One of the big advantages of devices that self-fold without any outside stimulus, the researchers say, is that they can involve a wider range of materials and more delicate structures.

“If you want to add printed electronics, you’re generally going to be using some organic materials, because a majority of printed electronics rely on them,” says Subramanian Sundaram, an MIT graduate student in electrical engineering and computer science and first author on the paper. “These materials are often very, very sensitive to moisture and temperature. So if you have these electronics and parts, and you want to initiate folds in them, you wouldn’t want to dunk them in water or heat them, because then your electronics are going to degrade.”

To illustrate this idea, the researchers built a prototype self-folding printable device that includes electrical leads and a polymer “pixel” that changes from transparent to opaque when a voltage is applied to it. The device, which is a variation on the “printable goldbug” that Sundaram and his colleagues announced earlier this year, starts out looking something like the letter “H.” But each of the legs of the H folds itself in two different directions, producing a tabletop shape.

The researchers also built several different versions of the same basic hinge design, which show that they can control the precise angle at which a joint folds. In tests, they forcibly straightened the hinges by attaching them to a weight, but when the weight was removed, the hinges resumed their original folds.

In the short term, the technique could enable the custom manufacture of sensors, displays, or antennas whose functionality depends on their three-dimensional shape. Longer term, the researchers envision the possibility of printable robots.

Sundaram is joined on the paper by his advisor, Wojciech Matusik, an associate professor of electrical engineering and computer science (EECS) at MIT; Marc Baldo, also an associate professor of EECS, who specializes in organic electronics; David Kim, a technical assistant in Matusik’s Computational Fabrication Group; and Ryan Hayward, a professor of polymer science and engineering at the University of Massachusetts at Amherst.

This clip shows an example of an accelerated fold. (Image: Tom Buehler/CSAIL)

Stress relief

The key to the researchers’ design is a new printer-ink material that expands after it solidifies, which is unusual. Most printer-ink materials contract slightly as they solidify, a technical limitation that designers frequently have to work around.

Printed devices are built up in layers, and in their prototypes the MIT researchers deposit their expanding material at precise locations in either the top or bottom few layers. The bottom layer adheres slightly to the printer platform, and that adhesion is enough to hold the device flat as the layers are built up. But as soon as the finished device is peeled off the platform, the joints made from the new material begin to expand, bending the device in the opposite direction.

Like many technological breakthroughs, the CSAIL researchers’ discovery of the material was an accident. Most of the printer materials used by Matusik’s Computational Fabrication Group are combinations of polymers, long molecules that consist of chainlike repetitions of single molecular components, or monomers. Mixing these components is one method for creating printer inks with specific physical properties.

While trying to develop an ink that yielded more flexible printed components, the CSAIL researchers inadvertently hit upon one that expanded slightly after it hardened. They immediately recognized the potential utility of expanding polymers and began experimenting with modifications of the mixture, until they arrived at a recipe that let them build joints that would expand enough to fold a printed device in half.

Whys and wherefores

Hayward’s contribution to the paper was to help the MIT team explain the material’s expansion. The ink that produces the most forceful expansion includes several long molecular chains and one much shorter chain, made up of the monomer isooctyl acrylate. When a layer of the ink is exposed to ultraviolet light — or “cured,” a process commonly used in 3-D printing to harden materials deposited as liquids — the long chains connect to each other, producing a rigid thicket of tangled molecules.

When another layer of the material is deposited on top of the first, the small chains of isooctyl acrylate in the top, liquid layer sink down into the lower, more rigid layer. There, they interact with the longer chains to exert an expansive force, which the adhesion to the printing platform temporarily resists.

The researchers hope that a better theoretical understanding of the reason for the material’s expansion will enable them to design materials tailored to specific applications — including materials that resist the 1–3 percent contraction typical of many printed polymers after curing.

“This work is exciting because it provides a way to create functional electronics on 3-D objects,” says Michael Dickey, a professor of chemical engineering at North Carolina State University. “Typically, electronic processing is done in a planar, 2-D fashion and thus needs a flat surface. The work here provides a route to create electronics using more conventional planar techniques on a 2-D surface and then transform them into a 3-D shape, while retaining the function of the electronics. The transformation happens by a clever trick to build stress into the materials during printing.”

NTSB Tesla crash report and new NHTSA regulations to come

Tesla Motors autopilot (photo:Tesla)

The NTSB (National Transportation Safety Board) has released a preliminary report on the fatal Tesla crash with the full report expected later this week. The report is much less favourable to autopilots than their earlier evaluation.

(This is a giant news day for Robocars. Today NHTSA also released their new draft robocar regulations which appear to be much simpler than the earlier 116 page document that I was very critical of last year. It’s a busy day, so I will be posting a more detailed evaluation of the new regulations — and the proposed new robocar laws from the House — later in the week.)

The earlier NTSB report indicated that though the autopilot had its flaws, overall the system was working. That is, although some drivers were misusing the autopilot, the combined population of drivers (those misusing it together with those using it properly) was overall safer than drivers with no autopilot. The new report makes it clear that this does not excuse the autopilot being so easy to abuse. (By abuse, I mean ignoring the warnings and treating it like a robocar, letting it drive you without actively monitoring the road, ready to take control.)


While the report mostly faults the truck driver for turning at the wrong time, it blames Tesla for not doing a good enough job of ensuring that the driver is not abusing the autopilot. Tesla makes you touch the wheel every so often, but the NTSB notes that it is possible to touch the wheel without actually looking at the road. The NTSB is also concerned that the autopilot can operate in this fashion even on roads it was not designed for. They note that Tesla has improved some of these things since the accident.

This means that “touch the wheel” systems will probably not be considered acceptable in the future, and there will have to be some means of assuring the driver is really paying attention. Some vendors have decided to put in cameras that watch the driver, or in particular the driver’s eyes, to check for attention. After the Tesla accident, I proposed a system which tested driver attention from time to time and punished drivers who were not paying attention, which could do the job without adding new hardware.
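In software terms, such a scheme could be as simple as a loop that issues surprise prompts at random intervals and escalates when they are missed. The sketch below is one hypothetical form it could take, not any vendor’s actual system; the callback names, timings and penalty policy here are illustrative only.

```python
# Hypothetical attention-test loop; timings and policy are invented.
import random
import time

def attention_check_loop(prompt_driver, disable_autopilot,
                         mean_period_s=600, response_window_s=4.0,
                         max_strikes=2):
    """Issue surprise attention tests; escalate when the driver fails."""
    strikes = 0
    while True:
        # Wait an unpredictable interval so the test can't be gamed.
        time.sleep(random.expovariate(1.0 / mean_period_s))
        if prompt_driver(timeout=response_window_s):
            strikes = 0                      # passed: reset the count
        else:
            strikes += 1                     # failed: escalate
            if strikes >= max_strikes:
                disable_autopilot()          # the "punishment"
                return

# Example wiring with stub callbacks, shortened for the demo:
responses = iter([True, False, False])
attention_check_loop(
    prompt_driver=lambda timeout: next(responses),
    disable_autopilot=lambda: print("Autopilot disabled for this trip"),
    mean_period_s=0.01)
```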

It also seems that autopilot cars will need to have maps of what roads they work on and which they don’t, and limit features based on the type of road you’re on.

Reprogramming nature

Credit: Draper

Summer is not without its annoyances — mosquitos, wasps, and ants, to name a few. As the cool breeze of September pushes us back to work, labs across the country are reconvening to tackle nature’s hardest problems. Sometimes forces that seem diametrically opposed come together in beautiful ways, as when robotics is infused into living organisms.

This past summer, researchers at Harvard and Arizona State University collaborated on successfully turning living E. coli bacteria into a cellular robot, called a “ribocomputer.” Working from archived movie footage, the Harvard scientists successfully stored the digital content in the bacterium most famous for making Chipotle customers violently ill. According to Seth Shipman, lead researcher at Harvard, this was the first time anyone has archived data in a living organism.

Responding to the original article, published in July in Nature, Julius Lucks, a bioengineer at Northwestern University, said that Shipman’s discovery will enable wider exploitation of DNA encoding. “What these papers represent is just how good we are getting at harnessing that power,” explained Lucks. The key to the discovery was Shipman’s ability to disguise the movie pixels in DNA’s four-letter code of molecules represented by the letters A, T, G and C, and to synthesize that DNA. Instead of generating one long strand of code, the researchers arranged it, along with other genetic elements, into short segments that looked like fragments of viral DNA. Another important factor was E. coli’s natural ability “to grab errant pieces of viral DNA and store them in its own genome—a way of keeping a chronological record of invaders. So when the researchers introduced the pieces of movie-turned-synthetic DNA—disguised as viral DNA—E. coli’s molecular machinery grabbed them and filed them away.”
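At its simplest, packing binary data into DNA’s four-letter alphabet is a base-4 encoding: two bits per base. The toy example below shows only that arithmetic; the published scheme’s viral-DNA-like framing and error handling are omitted, and the bit-to-base assignment here is an arbitrary choice for illustration.

```python
# Toy 2-bits-per-base encoding; the real scheme is more elaborate.
BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def bytes_to_dna(data: bytes) -> str:
    """Encode each byte as four bases, most significant bits first."""
    return "".join(BASE_FOR_BITS[(byte >> shift) & 0b11]
                   for byte in data for shift in (6, 4, 2, 0))

def dna_to_bytes(seq: str) -> bytes:
    """Invert bytes_to_dna: every four bases become one byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BITS_FOR_BASE[base]
        out.append(byte)
    return bytes(out)

payload = bytes([0x89, 0x50, 0x4E])   # any binary data, e.g. pixel values
encoded = bytes_to_dna(payload)
print(encoded)                        # GAGCCCAACATG
assert dna_to_bytes(encoded) == payload
```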

Shipman used this methodology to eventually turn the cells into a computer that not only stores data but actually performs logic-based decisions. Partnering with Alexander Green at Arizona State University’s Biodesign Institute, the two institutions collaborated on building their ribocomputer, which programs bacteria with ribonucleic acid, or RNA. According to Green, the “ribocomputer can evaluate up to a dozen inputs, make logic-based decisions using AND, OR, and NOT operations, and give the cell commands.” Green stated that this is the most complex biological computer created in a living cell to date. The discovery by Green and Shipman means that cells could now be programmed to self-destruct if they sense the presence of cancer markers, or even heal the body from within by attacking foreign toxins.
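In conventional-software terms, the behavior Green describes amounts to evaluating a small Boolean circuit over sensed inputs and issuing a command. A purely illustrative sketch, with marker names and the decision rule invented for the example:

```python
# Invented example of AND/OR/NOT decision logic over sensed inputs.
def cell_command(inputs: dict) -> str:
    """Evaluate a small Boolean circuit and return a command for the cell."""
    cancer_detected = inputs["marker_a"] and inputs["marker_b"]   # AND
    toxin_detected = inputs["toxin_x"] or inputs["toxin_y"]       # OR
    if cancer_detected and not inputs["healthy_tissue"]:          # NOT
        return "self-destruct"
    if toxin_detected:
        return "attack toxin"
    return "idle"

print(cell_command({"marker_a": True, "marker_b": True,
                    "toxin_x": False, "toxin_y": False,
                    "healthy_tissue": False}))    # -> self-destruct
```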

Timothy Lu of MIT called the discovery the beginning of the “golden age of circuit design.” Lu further said, “The way that electrical engineers have gone about establishing design hierarchy or abstraction layers — I think that’s going to be really important for biology.” In a recent IEEE article, Lucks cautioned readers that bending nature to human ends can ultimately raise a host of ethical considerations: “I don’t think anybody would really argue that it’s unethical to do this in E. coli. But as you go up in the chain [of living organisms], it gets more interesting from an ethical point of view.”

Nature has been the inspiration for numerous discoveries in modern robotics, even spawning its own field, biomimicry. However, manipulating living organisms according to the whims of humans is just beginning to take shape. A couple of years ago, Hong Liang, a researcher at Texas A&M University, outfitted a cockroach with a 3 g backpack-like device that carried a microprocessor, a lithium battery, a camera sensor, and an electrical nerve-control system. Liang then used her makeshift insect robo-suit to remotely drive the waterbug through a maze.

When asked by the Guardian what prompted her to utilize bugs as robots, Liang explained, “Insects can do things a robot cannot. They can go into small places, sense the environment, and if there’s movement, from a predator say, they can escape much better than a system designed by a human. We wanted to find ways to work with them.”

A cockroach outfitted with front and rear electrodes as well as a “backpack” for wireless control.
Credit: Alper Bozkurt, North Carolina State University

Liang believes that robo-roaches could be especially useful in disaster-recovery situations that take advantage of the insect’s small size and endurance. Liang says that some cockroaches can carry five times their own bodyweight, but the heavier the load, the greater the toll it takes on their performance. “We did an endurance test and they do get tired,” Liang explained. “We put them on a treadmill for a minute and then let them rest. If the backpack is lighter, they can go on for longer.” Liang has inspired other labs to work with different species of insects.

Draper, the US defense contractor, is working on its own insect robot by turning live dragonflies into controllable, nearly undetectable drones. The DragonflEye Project departs from the technique developed by Liang: it uses light to steer neurons instead of electrical nerve stimulation. According to Jesse Wheeler, the project lead at Draper, this methodology acts like “a joystick that tells the system how to coordinate flight activities.” Through this “joystick,” Wheeler can steer the wings in flight and program mission coordinates into the bug via an attached micro backpack that includes a guidance system, solar energy cells, navigation components, and optical stimulation hardware.

Draper believes that swarms of digitally enhanced insects might hold the key to national defense; locusts and bees have already been programmed to identify scents, such as those of chemical explosives. The critters could eventually be programmed to collect and analyze samples for homeland security, in addition to the obvious surveillance opportunities. Liang boasts that her cyborg roaches are “more versatile and flexible, and they require less control” than traditional robots. However, Liang also reminds us that “they’re more real”: living organisms, even with mechanical backpacks, are not machines.

Author’s note: This topic and more will be discussed at our next RobotLabNYC event in one week on September 19th at 6pm, “Investing In Unmanned Systems,” with experts from NASA, AUVSI, and Genius NY.

3 Questions: Iyad Rahwan on the “psychological roadblocks” facing self-driving cars

An image of some connected autonomous cars

by Peter Dizikes

This summer, a survey released by the American Automobile Association showed that 78 percent of Americans feared riding in a self-driving car, with just 19 percent trusting the technology. What might it take to alter public opinion on the issue? Iyad Rahwan, the AT&T Career Development Professor in the MIT Media Lab, has studied the issue at length, and, along with Jean-Francois Bonnefon of the Toulouse School of Economics and Azim Shariff of the University of California at Irvine, has authored a new commentary on the subject, titled, “Psychological roadblocks to the adoption of self-driving vehicles,” published today in Nature Human Behavior. Rahwan spoke to MIT News about the hurdles automakers face if they want greater public buy-in for autonomous vehicles.  

Q: Your new paper states that when it comes to autonomous vehicles, trust “will determine how widely they are adopted by consumers, and how tolerated they are by everyone else.” Why is this?

A: It’s a new kind of agent in the world. We’ve always built tools and had to trust that technology will function in the way it was intended. We’ve had to trust that the materials are reliable and don’t have health hazards, and that there are consumer protection entities that promote the interests of consumers. But these are passive products that we choose to use. For the first time in history we are building objects that are proactive and have autonomy and are even adaptive. They are learning behaviors that may be different from the ones they were originally programmed for. We don’t really know how to get people to trust such entities, because humans don’t have mental models of what these entities are, what they’re capable of, how they learn.

Before we can trust machines like autonomous vehicles, we have a number of challenges. The first is technical: the challenge of building an AI [artificial intelligence] system that can drive a car. The second is legal and regulatory: Who is liable for different kinds of faults? A third class of challenges is psychological. Unless people are comfortable putting their lives in the hands of AI, then none of this will matter. People won’t buy the product, the economics won’t work, and that’s the end of the story. What we’re trying to highlight in this paper is that these psychological challenges have to be taken seriously, even if [people] are irrational in the way they assess risk, even if the technology is safe and the legal framework is reliable.

Q: What are the specific psychological issues people have with autonomous vehicles?

A: We classify three psychological challenges that we think are fairly big. One of them is dilemmas: A lot of people are concerned about how autonomous vehicles will resolve ethical dilemmas. How will they decide, for example, whether to prioritize safety for the passenger or safety for pedestrians? Should this influence the way in which the car makes a decision about relative risk? And what we’re finding is that people have an idea about how to solve this dilemma: The car should just minimize harm. But the problem is that people are not willing to buy such cars, because they want to buy cars that will always prioritize themselves.

A second one is that people don’t always reason about risk in an unbiased way. People may overplay the risk of dying in a car crash caused by an autonomous vehicle even if autonomous vehicles are, on the average, safer. We’ve seen this kind of overreaction in other fields. Many people are afraid of flying even though you’re incredibly less likely to die from a plane crash than a car crash. So people don’t always reason about risk.

The third class of psychological challenges is this idea that we don’t always have transparency about what the car is thinking and why it’s doing what it’s doing. The carmaker has better knowledge of what the car thinks and how it behaves … which makes it more difficult for people to predict the behavior of autonomous vehicles, which can also diminish trust. One of the preconditions of trust is predictability: If I can trust that you will behave in a particular way, I can behave according to that expectation.

Q: In the paper you state that autonomous vehicles are better depicted “as being perfected, not as perfect.” In essence, is that your advice to the auto industry?

A: Yes, I think setting up very high expectations can be a recipe for disaster, because if you overpromise and underdeliver, you get in trouble. That is not to say that we should underpromise. We should just be a bit realistic about what we promise. If the promise is an improvement on the current status quo, that is, a reduction in risk to everyone, both pedestrians as well as passengers in cars, that’s an admirable goal. Even if we achieve it in a small way, that’s already progress that we should take seriously. I think being transparent about that, and being transparent about the progress being made toward that goal, is crucial.

Talking Machines: Machine Learning in the Field and Bayesian Baked Goods, with Ernest Mwebaze

In episode eight of season three we return to the epic (or maybe not so epic) clash between frequentists and Bayesians, take a listener question about the ethical questions creators of machine learning should be asking of themselves (not just their tools), and hear a conversation with Ernest Mwebaze of Makerere University.


See all the latest robotics news on Robohub, or sign up for our weekly newsletter.

IBM and MIT to pursue joint research in artificial intelligence, establish new MIT–IBM Watson AI Lab

MIT President L. Rafael Reif, left, and John Kelly III, IBM senior vice president, Cognitive Solutions and Research, shake hands at the conclusion of a signing ceremony establishing the new MIT–IBM Watson AI Lab. Credit: Jake Belcher

IBM and MIT today announced that IBM plans to make a 10-year, $240 million investment to create the MIT–IBM Watson AI Lab in partnership with MIT. The lab will carry out fundamental artificial intelligence (AI) research and seek to propel scientific breakthroughs that unlock the potential of AI. The collaboration aims to advance AI hardware, software, and algorithms related to deep learning and other areas; increase AI’s impact on industries, such as health care and cybersecurity; and explore the economic and ethical implications of AI on society. IBM’s $240 million investment in the lab will support research by IBM and MIT scientists.

The new lab will be one of the largest long-term university-industry AI collaborations to date, mobilizing the talent of more than 100 AI scientists, professors, and students to pursue joint research at IBM’s Research Lab in Cambridge, Massachusetts — co-located with the IBM Watson Health and IBM Security headquarters in Kendall Square — and on the neighboring MIT campus.

The lab will be co-chaired by Dario Gil, IBM Research VP of AI and IBM Q, and Anantha P. Chandrakasan, dean of MIT’s School of Engineering. (Read a related Q&A with Chandrakasan.) IBM and MIT plan to issue a call for proposals to MIT researchers and IBM scientists to submit their ideas for joint research to push the boundaries in AI science and technology in several areas, including:

  • AI algorithms: Developing advanced algorithms to expand capabilities in machine learning and reasoning. Researchers will create AI systems that move beyond specialized tasks to tackle more complex problems and benefit from robust, continuous learning. Researchers will invent new algorithms that can not only leverage big data when available, but also learn from limited data to augment human intelligence.
  • Physics of AI: Investigating new AI hardware materials, devices, and architectures that will support future analog computational approaches to AI model training and deployment, as well as the intersection of quantum computing and machine learning. The latter involves using AI to help characterize and improve quantum devices, and researching the use of quantum computing to optimize and speed up machine-learning algorithms and other AI applications.
  • Application of AI to industries: Given its location in IBM Watson Health and IBM Security headquarters in Kendall Square, a global hub of biomedical innovation, the lab will develop new applications of AI for professional use, including fields such as health care and cybersecurity. The collaboration will explore the use of AI in areas such as the security and privacy of medical data, personalization of health care, image analysis, and the optimum treatment paths for specific patients.
  • Advancing shared prosperity through AI: The MIT–IBM Watson AI Lab will explore how AI can deliver economic and societal benefits to a broader range of people, nations, and enterprises. The lab will study the economic implications of AI and investigate how AI can improve prosperity and help individuals achieve more in their lives.

In addition to IBM’s plan to produce innovations that advance the frontiers of AI, a distinct objective of the new lab is to encourage MIT faculty and students to launch companies that will focus on commercializing AI inventions and technologies that are developed at the lab. The lab’s scientists also will publish their work, contribute to the release of open source material, and foster an adherence to the ethical application of AI.

“The field of artificial intelligence has experienced incredible growth and progress over the past decade. Yet today’s AI systems, as remarkable as they are, will require new innovations to tackle increasingly difficult real-world problems to improve our work and lives,” says John Kelly III, IBM senior vice president, Cognitive Solutions and Research. “The extremely broad and deep technical capabilities and talent at MIT and IBM are unmatched, and will lead the field of AI for at least the next decade.”

“I am delighted by this new collaboration,” MIT President L. Rafael Reif says. “True breakthroughs are often the result of fresh thinking inspired by new kinds of research teams. The combined MIT and IBM talent dedicated to this new effort will bring formidable power to a field with staggering potential to advance knowledge and help solve important challenges.”

Both MIT and IBM have been pioneers in artificial intelligence research, and the new AI lab builds on a decades-long research relationship between the two. In 2016, IBM Research announced a multiyear collaboration with MIT’s Department of Brain and Cognitive Sciences to advance the scientific field of machine vision, a core aspect of artificial intelligence. The collaboration has brought together leading brain, cognitive, and computer scientists to conduct research in the field of unsupervised machine understanding of audio-visual streams of data, using insights from next-generation models of the brain to inform advances in machine vision. In addition, IBM and the Broad Institute of MIT and Harvard have established a five-year, $50 million research collaboration on AI and genomics.

MIT researchers were among those who helped coin and popularize the very phrase “artificial intelligence” in the 1950s. MIT pushed several major advances in the subsequent decades, from neural networks to data encryption to quantum computing to crowdsourcing. Marvin Minsky, a founder of the discipline, collaborated on building the first artificial neural network and he, along with Seymour Papert, advanced learning algorithms. Currently, the Computer Science and Artificial Intelligence Laboratory, the Media Lab, the Department of Brain and Cognitive Sciences, and the MIT Institute for Data, Systems, and Society serve as connected hubs for AI and related research at MIT.

For more than 20 years, IBM has explored the application of AI across many areas and industries. IBM researchers invented and built Watson, which is a cloud-based AI platform being used by businesses, developers, and universities to fight cancer, improve classroom learning, minimize pollution, enhance agriculture and oil and gas exploration, better manage financial investments, and much more. Today, IBM scientists across the globe are working on fundamental advances in AI algorithms, science and technology that will pave the way for the next generation of artificially intelligent systems.

For information about employment opportunities with IBM at the new AI Lab, please visit MITIBMWatsonAILab.mit.edu.

United Technologies acquires Rockwell Collins for $30 billion

Aerospace conglomerate United Technologies is paying $30 billion to acquire Rockwell Collins in a deal that creates one of the world’s largest makers of civilian and defense aircraft components. Rockwell Collins and United’s Aerospace Systems segment will combine to create a new business unit named Collins Aerospace Systems.

United Technologies will pay $140 per share for Rockwell Collins: $93.33 in cash and $46.67 in stock. The $140 price represents a 17.6% premium for Rockwell shareholders.

“This acquisition adds tremendous capabilities to our aerospace businesses and strengthens our complementary offerings of technologically advanced aerospace systems,” said UTC’s chairman and CEO, Greg Hayes.

Both companies have subsidiaries involved in robotics, drones and marine systems but both derive most of their revenue from civilian and defense aerospace.

  • United Technologies includes Otis elevators, escalators and moving walkways; Pratt & Whitney military and commercial engines, power units and turbojet products; Carrier heating, air-conditioning and refrigeration products; Chubb security and fire-safety solutions; Kidde smoke alarms and fire-safety technology; and UTC Aerospace Systems, which provides aircraft interiors, space and ISR systems, landing gear, and sensors and sensor-based systems for everything from ice detection to guidance and navigation. The Aerospace Systems unit has a wide range of products for multiple unmanned platforms, including unmanned underwater vehicles (UUVs).
  • Rockwell Collins (not to be confused with, nor involved in this acquisition: Rockwell Automation, which is highly involved in robotics) designs and produces electronic communications, avionics and in-flight entertainment systems for commercial, military and government customers, including navigation and display systems for unmanned commercial and military vehicles. Its electronics are installed in nearly every airline cockpit in the world. Its helmet-mounted display systems and in-car head-up displays are also big revenue producers.

According to Reuters, “The deal also follows a wave of consolidation among smaller aerospace manufacturers in recent years that was caused in part by the need to invest in new technologies such as metal 3-D printing and connected factories to stay competitive. A combined United Technologies and Rockwell Collins could similarly invest, and their broad portfolios have little overlap.”
