News


“Superhero” robot wears different outfits for different tasks

Dubbed “Primer,” a new cube-shaped robot can be controlled via magnets to make it walk, roll, sail, and glide. It carries out these actions by wearing different exoskeletons, which start out as sheets of plastic that fold into specific shapes when heated. After Primer finishes its task, it can shed its “skin” by immersing itself in water, which dissolves the exoskeleton. Credit: the researchers.

From butterflies that sprout wings to hermit crabs that switch their shells, many animals must adapt their exterior features in order to survive. While humans don’t undergo that kind of metamorphosis, we often try to create functional objects that are similarly adaptive — including our robots.

Despite what you might have seen in “Transformers” movies, though, today’s robots are still pretty inflexible. Each of their parts usually has a fixed structure and a single defined purpose, making it difficult for them to perform a wide variety of actions.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are aiming to change that with a new shape-shifting robot that’s something of a superhero: It can transform itself with different “outfits” that allow it to perform different tasks.

Dubbed “Primer,” the cube-shaped robot can be controlled via magnets to make it walk, roll, sail, and glide. It carries out these actions by wearing different exoskeletons, which start out as sheets of plastic that fold into specific shapes when heated. After Primer finishes its task, it can shed its “skin” by immersing itself in water, which dissolves the exoskeleton.

“If we want robots to help us do things, it’s not very efficient to have a different one for each task,” says Daniela Rus, CSAIL director and principal investigator on the project. “With this metamorphosis-inspired approach, we can extend the capabilities of a single robot by giving it different ‘accessories’ to use in different situations.”

Primer’s various forms have a range of advantages. For example, “Wheel-bot” has wheels that allow it to move twice as fast as “Walk-bot.” “Boat-bot” can float on water and carry nearly twice its weight. “Glider-bot” can soar across longer distances, which could be useful for deploying robots or switching environments.

Primer can even wear multiple outfits at once, like a Russian nesting doll. It can add one exoskeleton to become “Walk-bot,” and then interface with another, larger exoskeleton that allows it to carry objects and move two body lengths per second. To deploy the second exoskeleton, “Walk-bot” steps onto the sheet, which then blankets the bot with its four self-folding arms.

“Imagine future applications for space exploration, where you could send a single robot with a stack of exoskeletons to Mars,” says postdoc Shuguang Li, one of the co-authors of the study. “The robot could then perform different tasks by wearing different ‘outfits.’”

The project was led by Rus and Shuhei Miyashita, a former CSAIL postdoc who is now director of the Microrobotics Group at the University of York. Their co-authors include Li and graduate student Steven Guitron. An article about the work appears in the journal Science Robotics on Sept. 27.

Robot metamorphosis

Primer builds on several previous projects from Rus’ team, including magnetic blocks that can assemble themselves into different shapes and centimeter-long microrobots that can be precisely customized from sheets of plastic.

While robots that can change their form or function have been developed at larger sizes, it’s generally been difficult to build such structures at much smaller scales.

“This work represents an advance over the authors’ previous work in that they have now demonstrated a scheme that allows for the creation of five different functionalities,” says Eric Diller, a microrobotics expert and assistant professor of mechanical engineering at the University of Toronto, who was not involved in the paper. “Previous work at most shifted between only two functionalities, such as ‘open’ or ‘closed’ shapes.”

The team outlines many potential applications for robots that can perform multiple actions with just a quick costume change. For example, say some equipment needs to be moved across a stream. A single robot with multiple exoskeletons could potentially sail across the stream and then carry objects on the other side.

“Our approach shows that origami-inspired manufacturing allows us to have robotic components that are versatile, accessible, and reusable,” says Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT.

Designed in a matter of hours, the exoskeletons fold into shape after being heated for just a few seconds, suggesting a new approach to rapid fabrication of robots.

“I could envision devices like these being used in ‘microfactories’ where prefabricated parts and tools would enable a single microrobot to do many complex tasks on demand,” Diller says.

As a next step, the team plans to explore giving the robots an even wider range of capabilities, from driving through water and burrowing in sand to camouflaging their color. Guitron pictures a future robotics community that shares open-source designs for parts much the way 3-D-printing enthusiasts trade ideas on sites such as Thingiverse.

“I can imagine one day being able to customize robots with different arms and appendages,” says Rus. “Why update a whole robot when you can just update one part of it?”

This project was supported, in part, by the National Science Foundation.

#ERLEmergency2017 in tweets


The ERL Emergency Robots 2017 (#ERLemergency2017) major tournament in Piombino, Italy, gathered 130 participants from 16 universities and companies across 8 European countries. Participating teams designed robots able to provide first relief to survivors in disaster-response scenarios. The #ERLemergency2017 scenarios were inspired by the 2011 Fukushima nuclear accident. The competition took place from 15-23 September 2017 at Enel’s Torre del Sale and saw sea, land and air robots collaborating.

Teams worked very hard during the practice and competition days:

Robots could be found also in the exhibition area:

Or enjoying the sea:

Robotics experts held presentations and demos for the general public during the Opening Ceremony in Piazza Bovio.

The public got to know the teams,

And to see some emergency robots in action with the demo of TRADR project:

The competition site benefitted from the visit of some personalities:

Since they are the new generation of roboticists, children were not forgotten either: they enjoyed the free classes given by Scuola di Robotica.

At the Piombino Castle, the public attended more #robotics presentations.

After days of hard work, passion and enjoyment, the winners of the Grand challenge were announced:

Drones can almost see in the dark

We want to share our recent breakthrough: teaching drones to fly using an eye-inspired camera, which opens the door to fast, agile maneuvers and to flight in low-light environments where all commercial drones fail. Possible applications include supporting rescue teams with search missions at dusk or dawn. We submitted this work to IEEE Robotics and Automation Letters.

How it works
Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They do not suffer from motion blur and have a very high dynamic range, which allows them to provide reliable visual information during high-speed motion or in scenes with high dynamic range. However, event cameras output very little information when there is little motion, for example when the camera is nearly still. Conversely, standard cameras provide instant and rich information about the environment most of the time (at low speed and in good lighting), but they fail severely during fast motion or in difficult lighting such as high-dynamic-range or low-light scenes.

In this work, we present the first state estimation pipeline that leverages the complementary advantages of these two sensors by fusing events, standard frames, and inertial measurements in a tightly-coupled manner. We show that our hybrid pipeline improves accuracy by 130% over event-only pipelines and by 85% over standard-frames-only visual-inertial systems, while remaining computationally tractable.

Furthermore, we use our pipeline to demonstrate, to the best of our knowledge, the first autonomous quadrotor flight using an event camera for state estimation, unlocking flight scenarios that were not reachable with traditional visual-inertial odometry, such as low-light environments and high-dynamic-range scenes. We show that we can even fly in low light (for example, after switching off the lights in a room completely) or in scenes with a very high dynamic range (one side of the room brightly illuminated and the other side dark).
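The tightly-coupled fusion itself is described in the paper below; as a rough intuition for why the two sensors complement each other, here is a toy, heavily simplified Python sketch. The thresholds, blending gain, and 1-D setting are illustrative assumptions, not the authors’ method.

```python
# A toy 1-D illustration of why events and frames are complementary. This is
# NOT the tightly-coupled fusion described in the paper; the thresholds and
# the simple blending gain below are assumptions made only for illustration.
def visual_update(imu_prediction, frame_meas, event_meas, scene_speed, brightness):
    """Pick the more reliable visual cue, then blend it with the IMU prediction."""
    frame_ok = scene_speed < 2.0 and brightness > 0.2   # frames blur / fail in the dark
    event_ok = scene_speed > 0.1                        # events need some motion
    if frame_ok:
        measurement = frame_meas      # rich absolute-intensity information
    elif event_ok:
        measurement = event_meas      # high dynamic range, no motion blur
    else:
        return imu_prediction         # no usable visual data: trust inertial odometry
    gain = 0.3                        # a real system uses a filter or optimizer instead
    return imu_prediction + gain * (measurement - imu_prediction)

# Fast motion in a dark room: the frame is unusable, so events correct the IMU drift.
print(visual_update(imu_prediction=1.15, frame_meas=1.0, event_meas=1.02,
                    scene_speed=3.5, brightness=0.05))
```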

Paper:
T. Rosinol Vidal, H. Rebecq, T. Horstschaefer, D. Scaramuzza, “Hybrid, Frame and Event based Visual Inertial Odometry for Robust, Autonomous Navigation of Quadrotors,” submitted to IEEE Robotics and Automation Letters. PDF

#IROS2017 Live Coverage

The 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) is being held in Vancouver Canada this week. The theme of IROS 2017 is “Friendly People, Friendly Robots”. Robots and humans are becoming increasingly integrated in various application domains. We work together in factories, hospitals and households, and share the road. This collaborative partnership of humans and robots gives rise to new technological challenges and significant research opportunities in developing friendly robots that can work effectively with, for, and around people.

And it’s also IROS’s 30th birthday this year, an occasion for much celebration.

Many Robohubbers will be at IROS; watch out for Sabine, Audrow, Andra, Hallie, and AJung. We’re also looking for new members to join our community. If you’re interested, email sabine.hauert@robohub.org, and we’ll make sure to meet during the conference!


Udacity Robotics video series: Interview with Chris Anderson from 3D Robotics


Mike Salem from Udacity’s Robotics Nanodegree is hosting a series of interviews with professional roboticists as part of their free online material.

This week we’re featuring Mike’s interview with Chris Anderson, Co-Founder and CEO of 3D Robotics. Chris is a former Wired magazine editor turned robotics company co-founder and CEO. Learn about Chris’s amazing journey into the world of unmanned aerial vehicles.

You can find all the interviews here. We’ll be posting them regularly on Robohub.

Talking Machines: The long view and learning in person, with John Quinn


In episode nine of season three we chat about the difference between models and algorithms, take a listener question about summer schools and learning in person as opposed to learning digitally, and we chat with John Quinn of the United Nations Global Pulse lab in Kampala, Uganda and Makerere University’s Artificial Intelligence Research group.

If you enjoyed this episode, you may also want to listen to:

See all the latest robotics news on Robohub, or sign up for our weekly newsletter.

Robotics, the traditional path and new approaches

The hype cycle of the robotics field, based on general interest since its inception and derived from a joint review of publications, conferences, events and solutions. From “Dissecting Robotics — historical overview and future perspectives”.

Robotics, like many other technologies, suffered from inflated expectations, which led to a slowdown in development and results during the 90s. In recent years, several groups thought that flying robots, commonly known as drones, would address these limitations; however, it seems unlikely that the popularity of these flying machines will drive robot growth as expected. This article summarizes the traditional techniques used to build and program robots, together with new trends that aim to simplify and accelerate progress in the field.

Building robots

It’s a rather popular view that building a robot and programming its behavior remain two highly complicated tasks. Recent advances in adopting ROS as a standardized software framework for developing robot applications have helped with the latter; building a robot, however, remains a challenge. The lack of hardware-compatible systems, the absence of a marketplace of reusable modules, and the expertise required to develop even the most basic behaviors are just a few of the hurdles.

The integration-oriented approach
Robots are typically built by following the step-by-step process described below:

1. Buy parts: We decide what components our robot will need: a mobile base, a range finder, a processing device, and so on. Once decided, we fetch those that match our requirements and proceed towards integration.

2. Integration: Making different components speak to each other and cooperate towards achieving the end goal of the robot. Surprisingly, that’s where most of our time is spent.

3. Build the robot: Assembling all of the components into joints and mechanically linking them. This step is often carried out together with step 2 (integration).

4. Programming the robot: Making the robot do what it’s meant to do.

5. Test & adapt: Robots are typically programmed for predictable scenarios, so testing the pre-programmed behavior in real scenarios is always critical. Generally, these tests reveal where adaptations are needed, which in many cases pushes the process of building a robot back down to step 2, integration.

6. Deploy: Ship it!

The “integration-oriented” approach for building a robot.

It’s well understood that building a robot is a technically challenging task. Engineers often face situations where the integration effort of the robot, generally composed of diverse sub-components, supersedes many other tasks. Furthermore, every hardware modification or adaptation made while programming or building the robot demands further integration.

This method for building robots produces results that become obsolete within a short period.

Moreover, modules within the robots aren’t reusable in most cases, since the integration effort makes reusability an incredibly expensive (manpower-wise) and time-consuming task.

The modular approach

The current boom in robotics is producing a significant number of hardware devices. Although there is a growing trend toward using the Robot Operating System (ROS), these devices typically consist of incompatible electronics with differing software interfaces.

Now, imagine building robots by connecting interoperable modules together: actuators, sensors, communication modules, UI devices. Provided everything interoperates, the integration effort could be largely eliminated. The overall process of building robots would be simplified, and development effort and time would be reduced significantly.

Comparison between the “integration-oriented” and the “modular” approaches for building robots.

Modular components could be reused across robots, and that’s exactly what we’re working on with H-ROS, the Hardware Robot Operating System.
H-ROS is a vendor-agnostic infrastructure for creating robot modules that interoperate and can be exchanged between robots. It builds on top of ROS, the Robot Operating System, which it uses to define a set of standardized logical interfaces that each physical robot component must meet to be H-ROS compliant.

Programming robots

The robotics control pipeline

Traditionally, the process of programming a robot for a given task T is described as follows (a schematic sketch of this loop appears below):

Observation: The robot’s sensors produce measurements. These measurements, collectively called “observations”, are the inputs the robot receives in order to execute task T.

State estimation: Given the observations from step 1, we describe the robot’s motion over time by inferring characteristics such as its position, orientation or velocity. Obviously, mistakes in the observations will lead to errors in the state estimate.

Modeling & prediction: Determine the dynamics of the robot (the rules for how it moves) using a) the robot model (typically the robot’s URDF in the ROS world) and b) the state estimate. As with the previous step, errors in the state estimation will impact the results obtained here.

Planning: This step determines the actions required to execute task T, using both the state estimate and the dynamic model from the previous steps in the pipeline.

Low-level control: The final step in the pipeline transforms the plan into low-level control commands that steer the robot’s actuators.

The traditional “robotics control pipeline”
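As a schematic illustration of this pipeline, the sketch below wires the five stages into a single control loop. Every function body is a placeholder assumption; a real system would use probabilistic filters for state estimation, a dynamics model derived from the robot’s URDF, a motion planner, and actual motor drivers.

```python
# A schematic control loop mirroring the five-stage pipeline above.
# Each stage is a stub; everything below is an illustrative assumption.
def get_observations():            # 1. Observation: raw sensor measurements
    return {"encoders": [0.0, 0.0], "imu": (0.0, 0.0, 9.81)}

def estimate_state(observations):  # 2. State estimation: pose/velocity from sensors
    return {"position": (0.0, 0.0), "heading": 0.0, "velocity": 0.0}

def predict(state, model):         # 3. Modeling & prediction: apply robot dynamics
    return state                   # a real model propagates the state forward

def plan(state, task):             # 4. Planning: actions that accomplish the task
    return [("move_forward", 0.5), ("turn", 0.3)]

def low_level_control(action):     # 5. Low-level control: actuator commands
    print("sending motor command:", action)

def control_loop(task, model=None):
    state = estimate_state(get_observations())
    state = predict(state, model)
    for action in plan(state, task):
        low_level_control(action)

control_loop(task="reach_goal")
```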

Bio-inspired techniques

Artificial intelligence methods, and particularly bio-inspired techniques such as artificial neural networks (ANNs), are becoming more and more relevant in robotics. Starting around 2009, ANNs regained popularity and began delivering good results in fields such as computer vision (2012) and machine translation (2014). Nowadays, these fields are dominated by techniques that simulate the neural/synaptic activity of a living organism’s brain.

During the last few years we have seen these techniques translated to robotics for tasks such as robotic grasping (2016). Our team has been putting resources into exploring techniques that make it possible to train a robotic device in a manner conceptually similar to the way one trains a domesticated animal such as a dog or cat.

Training robots end-to-end for a given task is an integrated, bio-inspired approach that conflicts with the traditional robotics pipeline, yet it is already showing promising results, with behaviors that generalize.
We expect to see more active use of these bio-inspired techniques, are confident that they will drive high-impact innovations in robotics, and hope to contribute by opening up part of our work and results.
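To make the contrast with the pipeline above concrete, here is a minimal, hypothetical sketch of the end-to-end idea: a small neural network maps raw observations directly to motor commands and is nudged by a reward signal rather than hand-coded stage by stage. The network size, random “observations”, toy reward, and REINFORCE-style update are all illustrative assumptions, not the setup used in the grasping work cited above.

```python
# Minimal sketch of end-to-end training: a small network maps observations
# straight to motor commands and is improved from rewards. All numbers and the
# toy reward are illustrative assumptions.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for episode in range(100):
    obs = torch.randn(8)                          # stand-in for sensor readings
    mean_action = policy(obs)                     # network proposes motor commands
    dist = torch.distributions.Normal(mean_action, 0.1)
    action = dist.sample()                        # explore around the proposal
    reward = -(action ** 2).sum().item()          # toy reward: prefer small actions
    loss = -dist.log_prob(action).sum() * reward  # REINFORCE-style update
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```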

Programming robots versus training robots. This image depicts the traditional robotics approach, known as the “robotics control pipeline”, and the new “bio-inspired” approach that uses AI techniques simulating the neural/synaptic activity of the brain.

The roboticist matrix

All these new approaches for both building and programming robots bring a dilemma to roboticists. What should they focus on? Which approach should they follow for each particular use case? Let’s analyze the different combinations:

The roboticist matrix presents a comparison between traditional and new approaches for building and programming robots.

Integration-oriented + robotics control pipeline:
This combination represents the “traditional approach” in every sense. It’s the process most robot solutions in industry use today: integrated robots that typically belong to a single manufacturer and are programmed in a structured way to execute a well-defined task, usually achieving high levels of accuracy and repeatability. However, any uncertainty in the environment will typically cause the robot to fail at its task. The expenses of developing such systems typically range from 10.000–100.000 € for the simplest behaviors, and an order of magnitude above that for more complex tasks.

Integration-oriented + bio-inspired:
Behaviors that evolve, but with strong hardware constraints and limitations: traditional robots enhanced with bio-inspired approaches. Robots using this combination will be able to learn by themselves and adapt to changes in the environment; however, any modification, repurposing or extension of the robot hardware will require significant integration effort. The expenses of developing these robots are similar to those of the “traditional approach”.

Modular + robotics control pipeline:
Flexible hardware with structured behaviors. These robots will be built in a modular way. Building, repairing and/or repurposing them will be far more affordable than for traditional robots built with the integration-oriented approach; we estimate an order of magnitude less (1.000–10.000 €). Furthermore, modularity will open new opportunities for these robots.

Modular + bio-inspired:
This combination is the most innovative one and has the potential to disrupt the whole robotics market, changing both the way we build and the way we program/train robots. Yet it is also the most immature.
As with the previous combination, our team foresees that the cost of putting these robots together can be reduced compared with the more traditional approaches: we estimate that building and training them should cost from 1.000 to 10.000 € for simple scenarios, and up to 50.000 € for more elaborate ones.

Our path towards the future: modular robots and H-ROS

A modular robot built using H-ROS compatible components.

The team behind Erle Robotics is proud to announce that, together with Acutronic Robotics (Switzerland), Sony (Japan) is now also pushing the development of H-ROS, the Hardware Robot Operating System: a technology that aims to change the landscape of robotics by creating an ecosystem where hardware components can be reused among different robots, regardless of the original manufacturer. Our team strongly believes that the future of robotics will be about modular robots that can be easily repaired and reconfigured; H-ROS aims to shape that future. Sony’s leadership and vision in robotics are widely recognized in the community, and we are confident that, with Sony’s support, our innovations will spread even more rapidly.

Our team is focused on exploring these new opportunities and will present some results this week in Vancouver during ROSCon. Show your interest and join us in Canada!

The importance of research reproducibility in robotics

As highlighted in a previous post, despite the fact that robotics is increasingly regarded as a ‘Science’, as shown by the launch of new journals such as Science Robotics, reproducibility of experiments is still difficult or entirely lacking.

This is quite unfortunate, as the possibility of reproducing experimental results is a cornerstone of the scientific method. The situation pushes serious discussions (What is ‘soft robotics’? Is it needed? What has to be ‘soft’?) and paradigm clashes (Good Old Fashioned Artificial Intelligence vs. Deep Learning vs. Embodied Cognition) into the realm of literary controversy or, even worse, religious territory fights, with very little experimental evidence supporting the claims of the different parties. Not even wrong, as they say (following Peter Woit’s arguments on String Theory)?

The robotics community has been aware of these issues for a long time and more and more researchers in recent years have published datasets, code and other valuable information to allow others to reproduce their results. We are heading in the right direction, but we probably need to do more.

I think we should therefore welcome the fact that, for the first time ever, IEEE R&A Mag. will start accepting R-Articles (i.e., papers reporting experiments that aim to be fully reproducible) beginning this September. They will also accept short articles reporting on the replication of R-Article results, and author replies are solicited and will be published after peer review. The result will be a two-stage, high-quality review process: the first stage is the ordinary rigorous review of a top-tier publishing venue; the second is the replication of the experiments by the community, which is the core of the scientific method.

This seems like a historical improvement, doesn’t it?

There is more information on this in the column I wrote in the September issue of IEEE Robotics and Automation.

Automatic code reuse

“CodeCarbonCopy enables one of the holy grails of software engineering: automatic code reuse,” says Stelios Sidiroglou-Douskos, a research scientist at CSAIL. Credit: MIT News

by Larry Hardesty

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a new system that allows programmers to transplant code from one program into another. The programmer can select the code from one program and an insertion point in a second program, and the system will automatically make the modifications necessary — such as changing variable names — to integrate the code into its new context.

Crucially, the system is able to translate between “data representations” used by the donor and recipient programs. An image-processing program, for instance, needs to be able to handle files in a range of formats, such as jpeg, tiff, or png. But internally, it will represent all such images using a single standardized scheme. Different programs, however, may use different internal schemes. The CSAIL researchers’ system automatically maps the donor program’s scheme onto that of the recipient, to import code seamlessly.

The researchers presented the new system, dubbed CodeCarbonCopy, at the Association for Computing Machinery’s Symposium on the Foundations of Software Engineering.

“CodeCarbonCopy enables one of the holy grails of software engineering: automatic code reuse,” says Stelios Sidiroglou-Douskos, a research scientist at CSAIL and first author on the paper. “It’s another step toward automating the human away from the development cycle. Our view is that perhaps we have written most of the software that we’ll ever need — we now just need to reuse it.”

The researchers conducted eight experiments in which they used CodeCarbonCopy to transplant code between six popular open-source image-processing programs. Seven of the eight transplants were successful, with the recipient program properly executing the new functionality.

Joining Sidiroglou-Douskos on the paper are Martin Rinard, a professor of electrical engineering and computer science; Fan Long, an MIT graduate student in electrical engineering and computer science; and Eric Lahtinen and Anthony Eden, who were contract programmers at MIT when the work was done.

Mutatis mutandis

With CodeCarbonCopy, the first step in transplanting code from one program to another is to feed both of them the same input file. The system then compares how the two programs process the file.

If, for instance, the donor program performs a series of operations on a particular piece of data and loads the result into a variable named “mem_clip->width,” and the recipient performs the same operations on the same piece of data and loads the result into a variable named “picture.width,” the system will infer that the variables are playing the same roles in their respective programs.

Once it has identified correspondences between variables, CodeCarbonCopy presents them to the user. It also presents all the variables in the donor for which it could not find matches in the recipient, together with those variables’ initial definitions. Frequently, those variables are playing some role in the donor that’s irrelevant to the recipient. The user can flag those variables as unnecessary, and CodeCarbonCopy will automatically excise any operations that make use of them from the transplanted code.
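As a rough intuition for this matching step, here is a toy Python sketch: run both programs on the same input, log the sequence of values each variable takes, and pair variables whose traces agree. The trace format and matching rule are simplifications for illustration, not CodeCarbonCopy’s actual analysis.

```python
# Toy illustration of the matching idea: pair variables whose observed value
# traces coincide, and flag donor variables with no counterpart as candidates
# for removal. Not CodeCarbonCopy's implementation.
def match_variables(donor_trace, recipient_trace):
    matches, unmatched = {}, []
    for d_name, d_values in donor_trace.items():
        hit = next((r for r, v in recipient_trace.items() if v == d_values), None)
        if hit:
            matches[d_name] = hit
        else:
            unmatched.append(d_name)   # likely irrelevant to the recipient
    return matches, unmatched

donor = {"mem_clip->width": [640, 640], "mem_clip->dpi": [72]}
recipient = {"picture.width": [640, 640], "picture.height": [480]}
print(match_variables(donor, recipient))
# ({'mem_clip->width': 'picture.width'}, ['mem_clip->dpi'])
```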

New order

To map the data representations from one program onto those of the other, CodeCarbonCopy looks at the precise values that both programs store in memory. Every pixel in a digital image, for instance, is governed by three color values: red, green, and blue. Some programs, however, store those triplets of values in the order red, green, blue, and others store them in the order blue, green, red.

If CodeCarbonCopy finds a systematic relationship between the values stored by one program and those stored by the other, it generates a set of operations for translating between representations.
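The idea of inferring such a systematic relationship can be illustrated with a toy sketch that recovers a channel permutation from one pixel observed in both programs and then emits a translation function. Real memory layouts (and pixels with repeated channel values) need more than this; the sketch is an assumption-laden illustration only.

```python
# Toy sketch of inferring a representation mapping: given the same pixel as
# stored by each program, recover the channel permutation and produce a
# translation function. A real tool would need more samples to disambiguate
# repeated values; this only illustrates the "systematic relationship" idea.
def infer_permutation(donor_pixel, recipient_pixel):
    # For each donor channel, find where that value lives in the recipient.
    return [recipient_pixel.index(value) for value in donor_pixel]

def make_translator(perm):
    def translate(pixel):
        out = [0] * len(pixel)
        for src, dst in enumerate(perm):
            out[dst] = pixel[src]
        return tuple(out)
    return translate

perm = infer_permutation(donor_pixel=(200, 90, 30),      # stored as R, G, B
                         recipient_pixel=(30, 90, 200))  # stored as B, G, R
to_recipient = make_translator(perm)
print(perm, to_recipient((10, 20, 30)))   # [2, 1, 0] (30, 20, 10)
```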

CodeCarbonCopy works well with file formats, such as images, whose data is rigidly organized, and with programs, such as image processors, that store data representations in arrays, which are essentially rows of identically sized memory units. In ongoing work, the researchers are looking to generalize their approach to file formats that permit more flexible data organization and programs that use data structures other than arrays, such as trees or linked lists.

“In general, code quoting is where a lot of problems in software come from,” says Vitaly Shmatikov, a professor of computer science at Cornell Tech, a joint academic venture between Cornell University and Israel’s Technion. “Both bugs and security vulnerabilities — a lot of them occur when there is functionality in one place, and someone tries to either cut and paste or reimplement this functionality in another place. They make a small mistake, and that’s how things break. So having an automated way of moving code from one place to another would be a huge, huge deal, and this is a very solid step toward having it.”

“Recognizing irrelevant code that’s not important for the functionality that they’re quoting, that’s another technical innovation that’s important,” Shmatikov adds. “That’s the kind of thing that was an obstacle for a lot of previous approaches — that you know the right code is there, but it’s mixed up with a lot of code that is not relevant to what you’re trying to do. So being able to separate that out is a fairly significant technical contribution.”

Descartes revisited: Do robots think?

This past week, a robotic first happened: ABB’s Yumi robot conducted the Lucca Philharmonic Orchestra in Pisa, Italy. The dual-armed robot overshadowed even its vocal collaborator, Italian tenor Andrea Bocelli. While many will try to hype the performance as ushering in a new era of mechanical musicians, Yumi’s artistic career was short-lived: it was part of the opening ceremonies of Italy’s First International Festival of Robotics.

Italian conductor Andrea Colombini said of his student, “The gestural nuances of a conductor have been fully reproduced at a level that was previously unthinkable to me. This is an incredible step forward, given the rigidity of gestures by previous robots. I imagine the robot could serve as an aid, perhaps to execute, in the absence of a conductor, the first rehearsal, before the director steps in to make the adjustments that result in the material and artistic interpretation of a work of music.”

Harold Cohen with his robot AARON

Yumi is not the first computer artist. In 1973, professor and artist Harold Cohen created a software program called AARON, a mechanical painter. AARON’s works have been exhibited worldwide, including at the prestigious Venice Biennale. Following Cohen’s lead, Dr. Simon Colton of London’s Imperial College created “The Painting Fool,” with works on display at Paris’ Galerie Oberkampf in 2013. Colton wanted to test whether he could cross the emotional threshold with an artistic Turing Test. Colton explained, “I realized that the Painting Fool was a very good mechanism for testing out all sorts of theories, such as what it means for software to be creative. The aim of the project is for the software itself to be taken seriously as a creative artist in its own right, one day.”

In June 2015, Google’s Brain AI research team took artistic theory to the next level by infusing its software with the ability to create a remarkably human-like quality of imagination. To do this, Google’s programmers took a cue from one of the most famous masters of all time, Leonardo da Vinci. Da Vinci suggested that aspiring artists should start by looking at stains or marks on walls to create visual fantasies. Google’s neural net did just that, translating the layers of the image into spots and blotches with new stylized painterly features (see examples below).

1) Google uploaded a photograph of a standard Southwestern scene:

2) The computer then translated the layers as below:

In describing his creation, Google Brain senior scientist Douglas Eck said this past March, “I don’t think that machines themselves just making art for art’s sake is as interesting as you might think. The question to ask is, can machines help us make a new kind of art?” The goal of Eck’s platform called Magenta is to enable laypeople (without talent) to design new kinds of music and art, similar to synthetic keyboards, drums and camera filters. Dr. Eck himself is an admittedly frustrated failed musician who hopes that Magenta will revolutionize the arts in the same way as the electric guitar. “The fun is in finding new ways to break it and extend it,” Eck said excitedly.

The artistic development and growth of these computer programs is remarkable. Cohen, who passed away last year, said in a 2010 lecture regarding AARON “with no further input from me, it can generate unlimited numbers of images, it’s a much better colorist than I ever was myself, and it typically does it all while I’m tucked up in bed.” Feeling proud, he later corrected himself, “Well, of course, I wrote the program. It isn’t quite right to say that the program simply follows the rules I gave it. The program is the rules.”

In reflecting on the societal implications of creative bots, one cannot help but be reminded of the famous statement by philosopher René Descartes: “I think, therefore I am.” Challenging this idea for the robotic age, Professor Arai Noriko tested the thinking capabilities of robots. In 2011 she led a research team at Japan’s National Institute of Informatics to build an artificial intelligence program smart enough to pass the rigorous entrance exam of the University of Tokyo.

“Passing the exam is not really an important research issue, but setting a concrete goal is useful. We can compare the current state-of-the-art AI technology with 18-year-old students,” explained Dr. Noriko. The original goal set by Noriko’s team was for the Todai robot (named for the university) to be admitted to college by 2021. At a TED conference earlier this year, Noriko shocked the audience by revealing that Todai beat 80% of the students taking the exam, which consisted of seven sections, including math, English, science, and even a 600-word essay. Rather than celebrating, Noriko shared her fear with the crowd: “I was alarmed.”

Todai is able to search and process an immense amount of data, but unlike humans it does not read, even with 15 billion sentences already in its neural network. Noriko reminds us that “humans excel at pattern recognition, creative projects, and problem solving. We can read and understand.” However, she is deeply concerned that modern educational systems focus more on facts and figures than on creative reasoning, especially because humans could never compete with the fact-checking abilities of an AI. Noriko pointed to the entrance exam as an example: the Todai robot failed to grasp a multiple-choice question that would have been obvious even to young children. She tested her thesis at a local middle school and was dumbfounded when one-third of students couldn’t even “answer a simple reading comprehension question.” She concluded that for humans to compete with robots, “We have to think about a new type of education.”

Cohen also wrestled with the question of a thinking robot and whether his computer program could ever have the emotional impact of a human artist like Monet or Picasso. In his words, to reach that kind of level a machine would have to “develop a sense of self.” Cohen professed that “if it doesn’t, it means that machines will never be creative in the same sense that humans are creative.” Cohen later qualified his remarks about robotic creativity, adding, “it doesn’t mean that machines have no part to play with respect to creativity.”

Noriko is much more to the point: “How we humans will coexist with AI is something we have to think about carefully, based on solid evidence. At the same time, we have to think in a hurry because time is running out.” John Cryan, CEO of Deutsche Bank, echoed Noriko’s sentiment at a banking conference last week: “In our banks we have people behaving like robots doing mechanical things; tomorrow we’re going to have robots behaving like people. We have to find new ways of employing people, and maybe people need to find new ways of spending their time.”

iRobot on the defensive

SharkNinja, a well-known marketer of home consumer products, has entered the American robotic vacuum market with a product priced to compete against iRobot’s Roomba line of floor cleaners. Its new ION Robot navigates floors and carpets and docks and recharges automatically, and it sells at a price point very favorable compared with iRobot’s.

SharkNinja has partnered with ECOVACS, a Chinese manufacturer of many robotic products including robotic vacuums and floor cleaners, to custom manufacture the new Shark ION Robot – thus SharkNinja isn’t starting from scratch. [ECOVACS is a big seller in China. On Singles Day (11/11/2016), online via the e-commerce giant Alibaba, ECOVACS sold $60.2 million of robotic products, up from $47.6 million in 2015. The star performer was a DEEBOT robotic vacuum which sold 135,000 units. The ECOVACS window-cleaning robot was another standout product, with more than 10,000 units sold.]

iRobot’s stock took an 18% hit, perhaps on the news of the product launch by SharkNinja, or perhaps because some prominent analysts downgraded their ratings of the company, saying that iRobot is susceptible to a lower-cost, similarly capable, well-regarded branded product. The SharkNinja robotic vacuum fits those criteria.

SharkNinja is a fast-growing vendor of blenders, vacuums and other household products. They displaced Dyson in vacuums by engineering a superior product at a value price point (the Dyson robot vacuum sold for $1,000). SharkNinja, by using disruptive pricing and infomercial marketing, has garnered around 20% of the U.S. market for vacuums in just 10 years. SharkNinja’s non-robotic vacuums and blenders command significant shelf space and are very popular with customers and sellers alike. Thus they are a formidable competitor.

Also this month, SharkNinja raised an undisclosed sum from CDH Investments, a private equity fund with $20 billion of assets under management. CDH said they purchased “a significant equity interest” in SharkNinja.

iRobot’s Defensive Moves

iRobot has been making defensive moves recently. It acquired its two main distributors: Robopolis in Europe and Sales on Demand in Japan. It has used up much of its cash reserve to buy back shares of the company. And it has sued Hoover, Black & Decker, Bobsweep, Bissell Homecare, and Micro-Star International (MSI), which manufactures the Hoover and Black & Decker vacuums, over what it considers to be patent violations.

According to Zacks Equity Research, iRobot just favorably settled with MSI in an agreement under which MSI will exit the global robotic cleaning industry and provide an undisclosed compensation fee to iRobot.

“This settlement represents the first successful milestone on the enforcement effort iRobot initiated earlier this year,” said Glen Weinstein, executive vice president and chief legal officer at iRobot. “The agreement by MSI to exit the robotic cleaning industry signifies the value of iRobot’s intellectual property and the company’s efforts to protect it.”

Nevertheless, iRobot may be vulnerable to an international consumer products company with a full range of offerings that competes with similar products at lower prices.

Robots Podcast #242: Disney Robotics, with Katsu Yamane



In this episode, Audrow Nash interviews Katsu Yamane, Senior Research Scientist at Disney, about robotics at Disney. Yamane discusses Disney’s history with robots, how Disney currently uses robots, how designing robots at Disney differs from academia or industry, a realistic robot simulator used by Disney’s animators, and becoming a Disney Research “Imagineer.”

Katsu Yamane

Katsu received his PhD in mechanical engineering from the University of Tokyo in 2002. Following postdoctoral work at Carnegie Mellon University from 2002 to 2003, he was a faculty member at the University of Tokyo until he joined Disney Research, Pittsburgh, in October 2008. His main research area is humanoid robot control and motion synthesis, in particular methods involving human motion data and dynamic balancing. He is also interested in developing algorithms for creating character animation. He has always been fascinated by the way humans control their bodies, which led him to research on biomechanical human modeling and simulation to understand human sensation and motor control.

 

 


A foldable cargo drone

The field of drone delivery is currently a big topic in robotics. However, the reason your internet shopping doesn’t yet arrive via drone is that current flying robots can pose a safety risk to people and are difficult to transport and store.

A team from the Floreano Lab, NCCR Robotics and EPFL presents a new type of cargo drone that is inspired by origami, is lightweight and easily manoeuvrable, and uses a foldaway cage to ensure safety and transportability.

A foldable protective cage sits around the multicopter and around the package to be carried, shielding the spinning propellers and ensuring the safety of everyone nearby. When the folding cage is opened to load or unload the drone, a safety mechanism cuts off the motors, so safety is ensured even with completely untrained users.
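The article does not describe the drone’s firmware, but the kind of interlock mentioned above can be sketched in a few lines: motors may only be armed while the cage is latched shut, and opening the cage immediately disarms them. The class and method names below are hypothetical.

```python
# Minimal sketch of a cage/motor interlock of the kind described above.
# Names are hypothetical; the actual firmware is not described in the article.
class CageInterlock:
    def __init__(self):
        self.cage_closed = True
        self.motors_armed = False

    def open_cage(self):
        self.cage_closed = False
        self.motors_armed = False          # cut power the moment the cage opens
        print("cage open: motors disarmed")

    def close_cage(self):
        self.cage_closed = True

    def arm_motors(self):
        if self.cage_closed:
            self.motors_armed = True
        else:
            print("refusing to arm: cage is open")

drone = CageInterlock()
drone.open_cage()     # e.g. the recipient unloads the package
drone.arm_motors()    # refused until the cage is closed again
drone.close_cage()
drone.arm_motors()
```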


Where this drone takes a step forward, though, is in its folding cage, which allows it to be easily stowed away and transported. The team took inspiration from the origami folding shelters developed for space exploration and adapted them to create a Chinese-lantern shape; instead of paper, the skeletal structure is built from carbon-fibre tubes and 3D-printed flexible joints. The cage is opened and closed using a joint mechanism on the top and bottom and by pushing apart the resulting gap; in fact, both opening and closing the cage can be performed in just 1.2 seconds.

By adding such a cage to a multicopter, the team ensures safety for anyone who comes into contact with the drone. The drone can be caught while it’s flying, meaning it can deliver to people in places where landing is hard or even impossible, such as a collapsed building during search and rescue missions, where first aid, medication, water or food may need to be delivered quickly.

Currently, the drone can carry 0.5 kg of cargo over 2 km, and visitors to EPFL this summer may have noticed it transporting small items across campus, which it did around 150 times. It is hoped that, by scaling it up, it may be able to carry as much as 2 kg over 15 km, a payload and range that would allow for longer-distance deliveries.

Reference:
P.M. Kornatowski, S. Mintchev, and D. Floreano, “An origami-inspired cargo drone”, in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2017.

Udacity Robotics video series: Interview with Abdelrahman Elogeel from Amazon Robotics


Mike Salem from Udacity’s Robotics Nanodegree is hosting a series of interviews with professional roboticists as part of their free online material.

This week we’re featuring Mike’s interview with Abdelrahman Elogeel. Abdelrahman is a Software Development Engineer in the Core Machine Learning team at Amazon Robotics. His work includes bringing state-of-the-art machine learning techniques to tackle various problems for robots at Amazon’s robotic fulfilment centers.

You can find all the interviews here. We’ll be posting them regularly on Robohub.

New NHTSA Robocar regulations are a major, but positive, reversal

NHTSA released its latest draft robocar regulations just a week after the U.S. House passed a new regulatory regime and the Senate started working on its own. The proposed regulations preempt state regulation of vehicle design and allow companies to apply for high-volume exemptions from the standards that exist for human-driven cars.

It’s clear that the new approach will be quite different from the Obama-era one: much more hands-off. There are not a lot of things to like about the Trump administration, but this could be one of them. The prior regulations ran to 116 pages of detail, though they were mostly listed as “voluntary.” I wrote a long critique of those regulations in a 4-part series, which can be found under my NHTSA tag. They seem to have paid attention to that commentary and the similar commentary of others.

At 26 pages, the new report is much more modest, and actually says very little. Indeed, I could sum it up as follows:

  • Do the stuff you’re already doing
  • Pay attention to where and when your car can drive and document that
  • Document your processes internally and for the public
  • Go to the existing standards bodies (SAE, ISO etc.) for guidance
  • Create a standard data format for your incident logs
  • Don’t forget all the work on crash avoidance, survival and post-crash safety in modern cars that we worked very hard on
  • Plans for how states and the feds will work together on regulating this

Goals vs. Approaches

The document does a better job at understanding the difference between goals — public goods that it is the government’s role to promote — and approaches to those goals, which should be entirely the province of industry.

The new document is much more explicit that the 12 “safety design elements” are voluntary. I continue to believe that there is a risk they may not be truly voluntary, as there will be great pressure to conform with them, and possible increased liability for those who don’t, but the new document tries to avoid that, and its requests are much milder.

The document reflects the important realization that developers in this space will be creating new paths to safety and establishing new and different concepts of best practices. Existing standards have value, but at best they encode conventional wisdom, and robocars will not be created using conventional wisdom. The new document instead tends to recommend that the existing standards be considered, which is a reasonable plan.

A lightweight regulatory philosophy

My own analysis is guided by a lightweight regulatory approach which has been the norm until now. The government’s role is to determine important public goals and interests, and to use regulations and enforcement when, and only when, it becomes clear that industry can’t be trusted to meet these goals on its own.

In particular, the government should very rarely regulate how something should be done, and focus instead on what needs to happen as the end result, and why. In the past, all automotive safety technologies were developed by vendors and deployed, sometimes for decades, before they were regulated. When they were regulated, it was more along the lines of “All cars should now have anti-lock brakes.” Only with the more mature technologies have the regulations had to go into detail on how to build them.

Worthwhile public goals include safety, of course, and the promotion of innovation. We want to encourage both competition and cooperation in the right places. We want to protect consumer rights and privacy. (The prior regulations proposed a mandatory sharing of incident data which is watered down greatly in these new regulations.)

I call this lightweight because others have called for a great deal more regulation. I don’t, however, view it as highly laissez-faire. Driving is already highly regulated, and the idea that regulators would need to write rules to prevent companies from doing things they have shown no evidence of doing seems odd to me, particularly in a fast-changing field where regulators (and even developers) admit they have limited knowledge of what the technology’s final form will actually be.

Stating the obvious

While I laud the reduction of detail in these regulations, it’s worth pointing out that many of the remaining sections are stripped to the point of mostly outlining “motherhood” requirements — requirements that are obvious and that every developer has known for some time. You don’t have to say that the vehicle should follow the vehicle code and not hit other cars; anybody who needs to be told that is not a robocar developer. The set of obvious goals belongs better in a non-governmental advice document (which this does in fact declare itself, in part, to be, though of course it is governmental) than in something considered regulatory.

Overstating the obvious and discouraging the “black box.”

Sometimes a statement can be both obvious and possibly wrong in the light of new technology. The document has many requirements that vendors document their thinking and processes, which may be very difficult to do for systems built with machine learning. Machine learning sometimes produces a “black box” that works, but with minimal knowledge as to how it works. It may be that such systems will outperform other systems, leaving us with the dilemma of choosing between a superior system we can’t document and understand, and an inferior one we can.

There is a new research area known as “explainable AI” which hopes to bridge this gap and make it possible to document and understand why machine learning systems operate as they do. This is promising research, but it may never be complete. In spite of this, EU regulations are already attempting to forbid unexplainable AI. This may cut off very productive avenues of development — we don’t know enough to be sure about this as yet.

Some minor notes

The name

The new report pushes a new term — Automated Driving Systems. It seems every iteration comes up with a new name. The field is really starting to need a name people agree on, since nobody seems to much like driverless cars, self-driving cars, autonomous vehicles, automated vehicles, robocars or any of the others. This one is just as unwieldy, and its acronym is an English word and thus hard to search for.

The levels

The SAE levels continue to be used. I have been critical of the levels before, recently in this satire. It is wrong to try to understand robocars primarily through the role of humans in their operation, and wrong to suggest there is a progression of levels based on that.

The 12 safety elements

As noted, most of the sections simply advise obvious policies which everybody is already doing, and advise that teams document what they are doing.

1. System Safety

This section is modest, and describes fairly common existing practices for high reliability software systems. (Almost to the point that there is no real need for the government to point them out.)

2. Operational Design Domain

The idea of defining the situations in which the car can do certain things is a much better approach than imagining levels of human involvement. I would even suggest it replace the levels, with the human seen simply as one of the tools used to operate outside certain domains. Still, I see minimal need for NHTSA to say this — everybody already knows that roads and their conditions are different and complex and need different classes of technology.

3. Object and Event Detection and Response, 4. Fallback, 5. Validation, 6. HMI

Again, this is fairly redundant. Vendors don’t need to be told that vehicles must obey the vehicle code and stay in their lane and not hit things. That’s already the law. They know that only with a fallback strategy can they approach the reliability needed.

7. Computer Security

While everything here is already on the minds of developers, I don’t fault the reminder here because traditional automakers have a history of having done security badly. The call for a central clearing house on attacks is good, though it should not necessarily be Auto-ISAC.

8. Occupant Protection

A great deal of the current FMVSS (Federal Motor Vehicle Safety Standards) are about this, and because many vehicles may use exemptions from FMVSS to get going, a reminder about this is in order.

10. Data Recording

The most interesting proposal in the prior document was a requirement for public sharing of incident and crash data so that all teams could learn from every problem any team encounters. This would speed up development and improve safety, but vendors don’t like the fact it removes a key competitive edge — their corpus of driving experience.

The new document calls for a standard data format, and makes general motherhood calls for storing data in a crash, something everybody already does.

The call for a standard is actually difficult. Every vehicle has a different sensor suite and its own tools for examining the sensor data. Trying to standardize that at a truly useful level is a serious task. I had expected this task to fall to outside testing companies, who would learn (possibly by reverse engineering) the data formats of each car and try to put them in a standard format that was actually useful. I fear a standard agreed upon by major players (who don’t want to share their data) will be minimal and less useful.
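To make the discussion concrete, here is a hypothetical sketch of what a minimal vendor-neutral incident record might contain; NHTSA’s document calls for a standard format but does not define one, so every field below is an assumption for illustration.

```python
# Sketch of what a minimal vendor-neutral incident record might contain.
# Every field name below is an assumption for illustration only.
from dataclasses import dataclass, asdict
import json

@dataclass
class IncidentRecord:
    vehicle_id: str
    timestamp_utc: str          # ISO 8601
    location: tuple             # (latitude, longitude)
    operational_design_domain: str
    automation_engaged: bool
    event_type: str             # e.g. "collision", "disengagement", "near_miss"
    vehicle_speed_mps: float
    sensor_snapshot_uri: str    # pointer to raw sensor logs; format left to the vendor

record = IncidentRecord(
    vehicle_id="TEST-0042",
    timestamp_utc="2017-09-20T17:03:12Z",
    location=(49.2827, -123.1207),
    operational_design_domain="urban, daylight, dry",
    automation_engaged=True,
    event_type="disengagement",
    vehicle_speed_mps=8.3,
    sensor_snapshot_uri="s3://example-bucket/logs/0042/17-03-12/",
)
print(json.dumps(asdict(record), indent=2))
```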

State Roles

A large section of the document is about the bureaucratic distribution of roles between states and federal bodies. I will provide analysis of this later.

Conclusion

This document reflects a major change, almost a reversal, and largely a positive one. Going forward from here, I would encourage that the debate on regulation focus on

  • What public goods does the government have an interest in protecting?
  • Which ones are vendors showing they can’t be trusted to support voluntarily, both by present actions and past history?
  • How can innovation be encouraged and facilitated, and how can the public be kept well informed about what’s going on?

One of the key public goods missing from this document is privacy protection. This is one of the areas where vendors don’t have a great past history.
Another is civil rights protection — for example, what powers police will want over cars — where the government has a bad history.
