
Overview of the International Conference on Robot Ethics and Safety Standards – with survey on autonomous cars

The International Conference on Robot Ethics and Safety Standards (ICRESS-2017) took place in Lisbon, Portugal, from 20th to 21st October 2017. Maria Isabel Aldinhas Ferreira and João Silva Sequeira coordinated the conference with the aim of creating a vibrant multidisciplinary discussion around the pressing safety, ethical, legal and societal issues raised by the rapid introduction of robotic technology into many environments.

There were several fascinating keynote presentations. Mathias Scheutz's inaugural speech highlighted the need for robots to act in a way that would be perceived as applying moral principles and judgement. It was refreshing to see that we could potentially have autonomous robots that arrive at appropriate decisions, decisions that would be judged as "right" or "wrong" by an external observer.

On the other hand, Rodolphe Gélin provided the perspective of robot manufacturers, and showed how difficult the issues of safety have become. The public's expectations of robots seem to go beyond those for other conventional machines. The discussion was very diverse: some suggested that schemes similar to licensing would be required to qualify humans to operate robots (as they re-train or re-program them); others suggested schemes for insurance and liability.

Professional bodies, experts and standards were discussed in the other two keynote presentations: Raja Chatila spoke from the perspective of the IEEE Global AI Ethics Initiative, and Gurvinder Singh Virk from that of several ISO robot standardisation groups.

The conference also hosted a panel discussion, where interesting issues were debated, such as the challenges posed by the proliferation of drones among the general public. This topic has characteristics different from many other problems societies have faced with the introduction of new technologies. Drones can be 3D printed from many designs with potentially no liability for the designer, they can be operated with virtually no formal training, and they can be controlled from distances long enough that recovering the drone would not be sufficient to trace the operator/owner. Their cameras and data recording can be put to uses that some would consider privacy breaches, and they could compete for airspace with commercial aviation already in operation. It is unclear which regulations should apply, which bodies should intervene, and even then, how to enforce them. Would something similar happen when the public acquires pet robots or artificial companions?

The presentations of accepted papers raised many issues, including the difficulties of creating legal foundations for liability schemes and the responsibilities attributed to machines and operators. Particular aspects included the fact that, for specific tasks, computers already do significantly better than the average person (driving a car and negotiating a curve, for example). Another challenge is that humans will regularly be in close proximity to robots in manufacturing or office environments, bringing many new potential risks.

The vibrant discussions made one conclusion clear: the challenges are emerging much more rapidly than the answers.

We’ve also just launched a survey on the software behaviours an autonomous car should have when faced with difficult decisions. Just click here. Participation is completely voluntary and anonymous, and participants may win a prize.

Robocars will make traffic worse before it gets better

Many websites paint a very positive picture of the robocar future. And it is positive, but far from perfect. One problem I worry about in the short term is the way robocars are going to make traffic worse before they get a chance to make it better.

The goal of all robocars is to make car travel more pleasant and convenient, and eventually cheaper. You can’t make something better and cheaper without increasing demand for it, and that means more traffic.

This is particularly true for the early-generation pre-robocar vehicles in the plans of many major automakers. One of the first products these companies have released is sometimes called the “traffic jam assist.” This is a self-driving system that only works at low speed in a traffic jam.

Turns out that’s easy to do, effectively a solved problem. Low speed is inherently easier, and the highway is a simple driving environment without pedestrians, cyclists, intersections or cars going the other way. When you are boxed in with other cars in a jam, all you have to do is go with the flow. The other cars tell you where you need to go. Sometimes it can be complex when you get to whatever is blocking the road to cause the jam, but handoff to a human at low speeds is also fairly doable.

These products will be widely available soon, and they will make traffic jams much more pleasant. Which means there might be more of them.

I don’t have a 9 to 5 job, so I avoid travel in rush hour when I can. If somebody suggests we meet somewhere at 9am, I try to push it to 9:30 or 10. If I had a traffic jam assist car, I would be more willing to take the meeting at 9. When on the way, if I encountered a traffic jam, I would just think, “Ah, I can get some email done.”

After the traffic jam assist systems come the highway systems, which allow you to take your eyes off the road for extended periods. They will arrive pretty soon, too. These will encourage slightly longer commutes. That means more traffic, and also changes to real estate values. The corporate-run commuter buses from Google, Yahoo and many other tech companies in the SF Bay Area have already done that, making people decide they want to live in San Francisco and work an hour’s bus ride away in Silicon Valley. The buses don’t make traffic worse, but those doing this in private cars will.

Is it all doom?

Fortunately, some factors will counter a general trend to worse traffic, particularly as full real robocars arrive, the ones that can come unmanned to pick you up and drop you off.

  • As robocars reduce accident levels, that will reduce one of the major causes of traffic congestion.
  • Robocars don’t need to slow down and stare at accidents or other unusual things on the road, which also causes congestion.
  • Robocars won’t overcompensate on “sags” (dips) in the road. This overcompensation on sags is the cause of almost half the traffic congestion on Japanese highways.
  • Robocars look like they’ll be mainly electric. That doesn’t do much about traffic, but it does help with emissions.
  • Short-haul “last mile” robocars can actually make the use of trains, buses and carpools vastly more convenient.
  • Having only a few cars which drive more regularly, even something as simple as a good quality adaptive cruise control, actually does a lot to reduce congestion.
  • The rise of single person half-width vehicles promises a capacity increase, since when two find one another on the road, they can share the lane.
  • While it won’t happen in the early days, eventually robocars will follow the car in front of them with a shorter gap if they have a faster reaction time. This increases highway capacity.
  • Early robocars won’t generate a lot of carpooling, but it will pick up fairly soon (see below.)

What not to worry about

There are a few nightmare scenarios people have talked about that probably won’t happen. Today, a lot of urban driving involves hunting for parking. If we do things right, robocars won’t ever hunt for parking. They (and you) will be able to make an online query for available space at the best price and go directly to it. But they’ll do that after they drop you off, and they don’t need to park as close to your destination as you do. To incorporate city spaces into this market, a technology upgrade will be needed, and that may take some time, but private spaces can get into the game quickly.

What also won’t happen is people telling their car to drive around rather than park, to save money. Operating a car today costs about $20/hour, which is vastly more than any hourly priced parking, so nobody is going to do that to save money unless there is literally no parking for many miles. (Yes, there are parking lots that cost more than $20, but that’s because they sell you many hours or a whole day and don’t want a lot of in and out traffic. Robocars will be the most polite parking customers around, hiding valet-style at the back of the lot and leaving when you tell them.)

Another common worry is that people will send their cars on long errands unmanned: that mom might take the car downtown, then send it all the way back for dad to do a later commute, then back again to pick up the kids at school. While that’s not impossible, it’s actually not going to be the cheap or efficient thing to do. Thanks to robotaxis, we’re going to start thinking of cars as devices that wear out by the mile, not by the year, and all their costs will be per mile except parking and about $2 of financing per day. All this unmanned repositioning would almost double the miles driven, and thus the cost, so using a robotic taxi service (a robocar Uber) will be a much better deal.
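The arithmetic behind this can be sketched with a toy cost model. The per-mile cost below is an illustrative assumption, not a figure from any real fleet; only the ~$2/day financing figure comes from the text above.

```python
# Toy cost model for the "send the car back unmanned" scenario.
# COST_PER_MILE is an assumed all-in per-mile figure (wear, energy, insurance).

COST_PER_MILE = 0.50
FINANCING_PER_DAY = 2.00  # the ~$2/day financing mentioned above

def daily_cost(passenger_miles, empty_miles):
    """Cost of one car-day given miles driven with and without passengers."""
    total_miles = passenger_miles + empty_miles
    return total_miles * COST_PER_MILE + FINANCING_PER_DAY

# Family commute: 20 passenger-miles each way, car shuttles home empty between trips.
with_shuttling = daily_cost(passenger_miles=40, empty_miles=40)
no_shuttling = daily_cost(passenger_miles=40, empty_miles=0)
print(with_shuttling, no_shuttling)  # empty repositioning nearly doubles the bill
```

Because almost all cost is per mile, doubling the miles nearly doubles the daily bill, which is the point of the paragraph above.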

There will be empty car moves, of course. But it should not amount to more than 15% of total miles. In New York, taxis are vacant of a passenger for 38% of miles, but that’s because they cruise around all day looking for fares. When you only move when summoned, the rate is much better.

And then it gets better

After this “winter” of increased traffic congestion, the outlook gets better. Aside from the factors listed above, in the long term we get the potential for several big things to increase road capacity.

The earliest is dynamic carpooling, as you see with services like UberPool and LyftLines. After all, if you look at a rush-hour highway, you see that most of the seats going by are empty. Tools which can fill these seats can increase the capacity of the roads close to three times just with the cars that are moving today.

The next is advanced robocar transit. The ability to make an ad-hoc, on-demand transit system that combines vans and buses with last mile single person vehicles in theory allows almost arbitrary capacity on the roads. At peak hours, heavy use of vans and buses to carry people on the common segments of their routes could result in a 10-fold (or even more) increase in capacity, which is more than enough to handle our needs for decades to come.

Next after that is dynamic adaptation of roads. In a system where cities can change the direction of roads on demand, you can get more than a doubling of capacity when you combine it with repurposing of street parking. On key routes, street parking can be reserved only for robocars prior to rush hour, and then those cars can be told they must leave when rush hour begins. (Chances are they want to leave to serve passengers anyway.) Now your road has seriously increased capacity, and if it’s also converted to one-way in the peak direction, you could almost quadruple it.

The final step does not directly involve robocars, since it requires every car, not just robocars, to carry a smartphone and participate. This is the use of smart, internet-based road metering. With complete metering, you never get more cars trying to use a road segment than it has capacity to handle, so you very rarely get traffic congestion. You also don’t get induced demand greater than capacity, solving the bane of transportation planners.
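The congestion-avoidance core of such metering can be sketched as a simple admission controller. This is a toy model under stated assumptions (fixed capacity, first-come queueing); a real internet-based system would add pricing, routing and time slots.

```python
from collections import deque

class RoadMeter:
    """Toy admission controller for one road segment: admit cars only
    while the segment is under capacity, and queue the rest."""

    def __init__(self, capacity):
        self.capacity = capacity   # max cars the segment handles without congestion
        self.on_road = 0
        self.waiting = deque()

    def request_entry(self, car_id):
        """A car asks to enter the segment; it is admitted or queued."""
        if self.on_road < self.capacity:
            self.on_road += 1
            return "admitted"
        self.waiting.append(car_id)
        return "queued"

    def exit(self):
        """A car leaves the segment; the first waiting car (if any) is admitted."""
        self.on_road -= 1
        if self.waiting:
            self.waiting.popleft()
            self.on_road += 1
```

Because entry is gated on capacity, demand on the segment can never exceed what it handles smoothly, which is exactly the property the paragraph above describes.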

Humanoids 2017 photo contest – vote here

For the first time, Humanoids 2017 is hosting a photo contest. We received 39 photos from robotics laboratories and institutes all over the world, showing humanoids in serious or funny contexts.

A jury composed of Erico Guizzo (IEEE Spectrum), Sabine Hauert (Robohub) and Giorgio Metta (Humanoids 2017 Awards Chair) will select the winning photos from among those that receive the most likes on social media.

The idea of using these media is to increase awareness of and interest in humanoids and robotics among the public, as these photos can easily be shared and reach people outside the research and robotics field.

You can see and vote for all the photos on Facebook or below by liking the photos (make sure you look at all of them!). More information about the competition can be found here.

How to start with self-driving cars using ROS

Self-driving cars are inevitable.

In recent years, self-driving cars have become a priority for automotive companies. BMW, Bosch, Google, Baidu, Toyota, GE, Tesla, Ford, Uber and Volvo are investing in autonomous driving research. Also, many new companies have appeared in the autonomous cars industry: Drive.ai, Cruise, nuTonomy, Waymo to name a few (read this post for a list of 260 companies involved in the self-driving industry).

The rapid development of this field has prompted a large demand for autonomous car engineers. Among the required skills, knowing how to program with ROS is becoming an important one. You just have to visit the robotics-worldwide list to see the large number of job offers for working/researching in autonomous cars that demand knowledge of ROS.

Why ROS is interesting for autonomous cars

Robot Operating System (ROS) is a mature and flexible framework for robotics programming. ROS provides the tools required to easily access sensor data, process it, and generate appropriate responses for the motors and other actuators of the robot. The whole ROS system is designed to be fully distributed in terms of computation, so different computers can take part in the control processes and act together as a single entity (the robot).
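The distributed design rests on ROS's publish/subscribe topic model, which can be illustrated with a toy in-process bus. This is not ROS code; real nodes would use rospy.Publisher and rospy.Subscriber over the network, but the decoupling idea is the same.

```python
class TopicBus:
    """Minimal publish/subscribe bus illustrating ROS's topic model.
    Publishers and subscribers never reference each other directly;
    they only agree on a topic name."""

    def __init__(self):
        self._subs = {}

    def subscribe(self, topic, callback):
        """Register a callback to receive every message on `topic`."""
        self._subs.setdefault(topic, []).append(callback)

    def publish(self, topic, msg):
        """Deliver `msg` to every subscriber of `topic`."""
        for cb in self._subs.get(topic, []):
            cb(msg)

bus = TopicBus()
received = []
bus.subscribe("/scan", received.append)            # a "consumer" node
bus.publish("/scan", {"ranges": [1.2, 0.8, 2.5]})  # a "lidar driver" node
```

Because nodes only share topic names, any process on any machine can play either role, which is what lets a car's computers act together as one robot.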

Due to these characteristics, ROS is a perfect tool for self-driving cars. After all, an autonomous vehicle can be considered as just another type of robot, so the same types of programs can be used to control them. ROS is interesting because:

1. There is a lot of code for autonomous cars already created. Autonomous cars require algorithms that can build a map, localize the vehicle using lidars or GPS, plan paths along maps, avoid obstacles, and process point clouds or camera data to extract information, among others. Many algorithms designed for the navigation of wheeled robots are almost directly applicable to autonomous cars. Since those algorithms are already available in ROS, self-driving cars can use them off-the-shelf.

2. Visualization tools are already available. ROS provides a suite of graphical tools for easily recording and visualizing data captured by the sensors, and for representing the status of the vehicle in a comprehensible manner. It also provides a simple way to create additional visualizations for particular needs. This is tremendously useful when developing the control software and debugging the code.

3. It is relatively simple to start an autonomous car project with ROS onboard. You can start right now with a simple differential-drive robot equipped with a camera, a laser scanner, and the ROS navigation stack, and be set up in a few hours. That can serve as a basis for understanding how the whole thing works. Then you can move to more professional setups, for example buying a car that is already prepared for autonomous driving experiments, with full ROS support (like the Dataspeed Inc. Lincoln MKZ DBW kit).
Self-driving car companies have identified those advantages and have started to use ROS in their developments. Examples of companies using ROS include BMW (watch their presentation at ROSCON 2015), Bosch or nuTonomy.
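To give a flavor of the off-the-shelf navigation code point 1 refers to, here is a minimal breadth-first path planner over an occupancy grid. The real ROS navigation stack ships far more capable planners (Dijkstra- and A*-based global planners over costmaps), so treat this as a sketch of the underlying idea only.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first path planner on an occupancy grid (0 = free, 1 = obstacle).
    Cells are (row, col) tuples; returns the shortest 4-connected path
    from start to goal, or None if the goal is unreachable."""
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk the parent links back to start to reconstruct the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # goal unreachable
```

The same planner structure works whether the "robot" is a small wheeled base or a car; only the map resolution and the motion constraints change, which is why wheeled-robot navigation code transfers so directly.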

Weak points of using ROS

ROS is not all nice and good. At present, ROS presents two important drawbacks for autonomous vehicles:

1. Single point of failure. All ROS applications rely on a software component called the roscore. That component, provided by ROS itself, handles all coordination between the different parts of a ROS application. If it fails, the whole ROS system goes down. This means it does not matter how well your ROS application has been constructed: if the roscore dies, your application dies.

2. ROS is not secure. The current version of ROS does not implement any security mechanism to prevent third parties from joining the ROS network and reading the communication between nodes. This means anybody with access to the car’s network can tap into the ROS messaging and hijack the car’s behavior.
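Drawback 1 is why safety-critical ROS deployments typically add a master watchdog that commands a failsafe when the roscore disappears. A minimal sketch of that logic follows; the liveness probe is injected as a plain function so the sketch runs without ROS, but in a real node it could wrap `rosgraph.is_master_online()` (that wiring is an assumption, not shown).

```python
import time

def watchdog(is_master_alive, on_failure, checks=3, interval=0.0):
    """Poll a liveness probe; call `on_failure` (e.g. command a safe stop
    of the vehicle) as soon as the master stops responding.
    Returns the number of successful checks performed."""
    ok = 0
    for _ in range(checks):
        if not is_master_alive():
            on_failure()
            return ok
        ok += 1
        time.sleep(interval)
    return ok
```

The point of the sketch is that the watchdog lives outside the ROS graph, so it can still act when the roscore, and with it every ROS node, has gone down.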

Both drawbacks are expected to be addressed in the next version of ROS, ROS 2. Open Robotics, the creators of ROS, have recently released a second beta of ROS 2, which can be tested here. A release version is expected by the end of 2017.

In any case, we believe that the ROS-based path to self-driving vehicles is the way to go. That is why we propose a low-budget learning path for becoming a self-driving car engineer, based on the ROS framework.

Our low-cost path to becoming a self-driving car engineer

Step 1
The first thing you need to do is learn ROS. ROS is a complex framework, and learning it requires dedication and effort. Watch the following video for a list of the 5 best methods to learn ROS. Learning basic ROS will help you understand how to create programs with the framework, and how to reuse programs made by others.

Step 2
Next, you need to get familiar with the basic concepts of robot navigation with ROS. Learning how the ROS navigation stack works will give you knowledge of basic navigation concepts like mapping, path planning and sensor fusion. There is no better way to learn this than taking the ROS Navigation in 5 Days course developed by Robot Ignite Academy (disclaimer: this is provided by my company, The Construct).
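Of those concepts, sensor fusion is the least intuitive, so here is a one-dimensional toy version: blending drifting but smooth odometry with noisy but unbiased GPS fixes via a complementary filter. Real ROS localization uses probabilistic filters (e.g. an EKF), so this is a stand-in for the idea, not the actual stack.

```python
def fuse(gps_positions, odom_deltas, alpha=0.9):
    """1-D complementary filter. `gps_positions` are absolute fixes,
    `odom_deltas` are per-step displacements from wheel odometry,
    and `alpha` weights the dead-reckoned prediction over the GPS fix.
    Returns the fused position estimate at each step."""
    estimate = gps_positions[0]
    estimates = [estimate]
    for gps, delta in zip(gps_positions[1:], odom_deltas):
        predicted = estimate + delta                  # dead-reckoning step
        estimate = alpha * predicted + (1 - alpha) * gps  # correct with GPS
        estimates.append(estimate)
    return estimates
```

The filter trusts odometry over short horizons while the GPS term keeps long-run drift bounded, which is the trade-off every fusion scheme in the navigation stack is managing.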

Step 3
The third step is to learn the basics of applying ROS to autonomous cars: how to use the sensors available in any standard autonomous car, how to navigate using GPS, how to build an obstacle detection algorithm from sensor data, and how to interface ROS with the CAN bus protocol used throughout the automotive industry.
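On the CAN side, it helps to know what a raw frame looks like. The sketch below unpacks the 16-byte Linux SocketCAN frame layout (32-bit ID, 1-byte data length, 3 padding bytes, 8 data bytes); a real ROS CAN bridge such as socketcan_bridge handles this for you, and the frame contents here are hypothetical.

```python
import struct

def parse_can_frame(raw):
    """Unpack a 16-byte Linux SocketCAN frame into (arbitration_id, data).
    The mask keeps the 29-bit extended identifier and drops flag bits."""
    can_id, dlc, data = struct.unpack("<IB3x8s", raw)
    return can_id & 0x1FFFFFFF, data[:dlc]

# A hypothetical frame: ID 0x123 carrying two bytes of (made-up) wheel-speed data.
frame = struct.pack("<IB3x8s", 0x123, 2, bytes([0x10, 0x27, 0, 0, 0, 0, 0, 0]))
can_id, payload = parse_can_frame(frame)
```

Interfacing ROS with the car then amounts to reading frames like this off the bus and republishing the decoded signals as ROS messages.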

The following video tutorial is ideal for starting to learn ROS applied to autonomous vehicles from zero. The course teaches how to program a car with ROS for autonomous navigation using an autonomous car simulation. The video is available for free, but to get the most out of it, we recommend you do the exercises at the same time by enrolling in the Robot Ignite Academy.

Step 4
After the basic ROS for autonomous cars course, you should learn more advanced subjects like obstacle and traffic-signal identification, road following, and coordination of vehicles at intersections. For that purpose, our recommendation is the Duckietown project at MIT. The project provides complete instructions for physically building a small-scale town, with lanes, traffic lights and traffic signs, to practice algorithms in the real world (even if at a small scale). It also provides instructions for building the autonomous cars that populate the town. The cars are based on a differential drive with a single camera as their only sensor, which is how they achieve a very low cost (around $100 per car).

Image by Duckietown project

Due to its low monetary requirements and the hands-on experience it offers, the Duckietown project is ideal for starting to practice autonomous car concepts like vision-based line following, detecting other cars, and traffic-signal-based behavior. If even that cost is beyond your budget, you can use a Gazebo simulation of Duckietown and still practice most of the content.

Step 5
Then, if you really want to go pro, you need to practice with real-life data. For that purpose, we propose you install and learn from the Autoware project. The project provides real data obtained from real cars on real streets by means of ROS bags. ROS bags are logs of data captured from sensors, which ROS programs can consume as if they were connected to the real car. Using those bags, you can test algorithms as if you had an autonomous car to practice with (the only limitation being that the data is always the same, restricted to the situation in which it was recorded).
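The bag-replay workflow can be sketched without ROS: records are delivered in timestamp order, each at its recorded offset from the start (optionally sped up), exactly as `rosbag play` does. The record format here is a simplified stand-in for what you would read out of a real bag file.

```python
def replay(records, handler, speedup=1.0):
    """Replay timestamped sensor records in recorded order.
    `records` are (timestamp, topic, message) tuples. Returns the
    delay (in simulated seconds) at which each record is delivered,
    so the caller can schedule real sleeps if desired."""
    records = sorted(records)            # bags are stored in time order
    start = records[0][0]
    schedule = []
    for stamp, topic, msg in records:
        schedule.append((stamp - start) / speedup)
        handler(topic, msg)              # the code under test sees live-looking data
    return schedule
```

Your perception or planning code receives the same message stream it would get from a live car, which is what makes bag-driven development so effective.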

Image by the Autoware project

The Autoware project is an ambitious one that, apart from the ROS bags, provides multiple state-of-the-art algorithms for localization, mapping, and obstacle detection and identification using deep learning. It is large and somewhat complex, but definitely worth studying for a deeper understanding of ROS in autonomous vehicles. I recommend watching the Autoware ROSCON 2017 presentation for an overview of the system (available from October 2017).

Step 6
The final step is to start implementing your own ROS algorithms for autonomous cars and testing them in different, realistic situations. The previous steps provided real-life situations, but the bags were limited to the situations in which they were recorded. Now it is time to test your algorithms more broadly. You can reuse existing algorithms from a mix of all the steps above, but at some point you will find that those implementations lack something your goals require. You will have to start developing your own algorithms, and you will need lots of tests. For this purpose, one of the best options is a Gazebo simulation of an autonomous car as a testbed for your ROS algorithms. Recently, Open Robotics released a car simulation for the Gazebo 8 simulator.

Image by Open Robotics

The ROS-based simulation contains a Prius car model equipped with a 16-beam lidar on the roof, 8 ultrasonic sensors, 4 cameras, and 2 planar lidars, which you can use to practice and create your own self-driving car algorithms. With the simulation you can put the car in as many different situations as you want, check whether your algorithm works in each, and repeat until it does.
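A first algorithm to try against the simulated planar lidars is obstacle detection from a scan. The sketch below mimics the `ranges` field of a sensor_msgs/LaserScan message as a plain list (the real message also carries the angle and range metadata used here as parameters); it is an illustration, not the simulator's actual API.

```python
def nearest_obstacle(ranges, angle_min, angle_increment, max_range):
    """Find the closest obstacle in a planar lidar scan.
    `ranges[i]` is the measured distance at bearing
    angle_min + i * angle_increment; readings at or beyond
    max_range are treated as 'no return'.
    Returns (distance, bearing) or None if the scan is clear."""
    best = None
    for i, r in enumerate(ranges):
        if r < max_range and (best is None or r < best[0]):
            best = (r, angle_min + i * angle_increment)
    return best
```

Feeding the simulator's scans through a function like this, and braking when the returned distance drops below a threshold, is about the simplest closed-loop behavior you can validate in Gazebo before moving on to harder ones.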

Conclusion

Autonomous driving is an exciting field, with demand for experienced engineers increasing year after year. ROS is one of the best options for jumping quickly into the subject, so learning ROS for self-driving vehicles is becoming an important skill for engineers. We have presented a full path to learning ROS for autonomous vehicles while keeping the budget low. Now it is your turn to make the effort and learn. Money is no longer an excuse. Go for it!

Join the Robohub community!

As you know, Robohub is a non-profit dedicated to connecting the robotics community to the public. Over nearly a decade we’ve produced more than 200 podcasts and helped thousands of roboticists communicate about their work through videos and blog posts.

Our website Robohub.org provides free, high-quality information and is seen as a top robotics blog, with nearly 1.5M pageviews every year and 20k followers on social media (Facebook, Twitter).

If you have a story you would like to share (news, tutorials, papers, conference summaries), please send it to editors@robohub.org and we’ll do our best to help you reach a wide audience.

In addition, we’re currently growing our community of volunteers. If you’re interested in blogging, video/podcasting, moderating discussions, curating news, covering conferences, or helping with sustainability of our non-profit, we would love to hear from you!

Just fill in this very short form.

By joining the community, you’ll be part of a grassroots international organisation. You’ll learn about robotics from the top people in the field, travel to conferences, and improve your communication skills. More importantly, you’ll be helping us make sure robotics is portrayed to the public in a high-quality manner.

And thanks to all those who have already joined the community, supported us, or sent us their news!

The five senses of robotics

Healthy humans take their five senses for granted. Molding metal into perceiving machines, by contrast, requires a significant number of engineers and a great deal of capital. Already, we have handed over many of our faculties to embedded devices in our cars, homes, workplaces, hospitals, and governments. Even automation skeptics unwillingly trust the smart gadgets in their pockets with their lives.

Last week, General Motors stepped up its autonomous car effort by augmenting its artificial intelligence unit, Cruise Automation, with greater perception capabilities through the acquisition of LIDAR (Light Detection and Ranging) technology company Strobe. Cruise was purchased with great fanfare last year by GM for a billion dollars. Strobe’s unique value proposition is shrinking its optical arrays to the size of a microchip, substantially reducing the cost of a traditionally expensive sensor that autonomous vehicles rely on to measure the distance of objects on the road. Cruise CEO Kyle Vogt wrote last week on Medium that “Strobe’s new chip-scale LIDAR technology will significantly enhance the capabilities of our self-driving cars. But perhaps more importantly, by collapsing the entire sensor down to a single chip, we’ll reduce the cost of each LIDAR on our self-driving cars by 99%.”

GM is not the first Detroit automaker aiming to reduce the costs of sensors on the road; last year Ford invested $150 million in Velodyne, the leading LIDAR company on the market. Velodyne is best known for its rotating sensor that is often mistaken for a siren on top of the car. In describing the transaction, Raj Nair, Ford’s Executive Vice President, Product Development and Chief Technical Officer, said “From the very beginning of our autonomous vehicle program, we saw LIDAR as a key enabler due to its sensing capabilities and how it complements radar and cameras. Ford has a long-standing relationship with Velodyne and our investment is a clear sign of our commitment to making autonomous vehicles available for consumers around the world.” As the race heats up for competing perception technologies, the field of LIDAR startups is already crowded, with eight other companies (below) competing to become the standard vision system for autonomous driving.

Walking the halls of Columbia University’s engineering school last week, I visited a number of the robotics labs working on the next generation of sensing technology. Dr. Peter Allen, Professor of Computer Science, is the founder of the Columbia Grasp Database, whimsically called GraspIt!, which enables robots to better recognize and pick up everyday objects. GraspIt! provides “an architecture to enable robotic grasp planning via shape completion.” The open source GraspIt! database has over 440,000 3D representations of household articles from varying viewpoints, which are used to train its “3D convolutional neural network (CNN).” According to the lab’s IEEE paper published earlier this year, the CNN takes “a 2.5D pointcloud” capture of “a single point of view” of each item and then “fills in the occluded regions of the scene, allowing grasps to be planned and executed on the completed object” (see diagram below). As Dr. Allen demonstrated last week, the CNN performs as successfully in live scenarios, with a robot “seeing” an object for the first time, as it does in computer simulations.


Taking a novel approach in utilizing their cloud-based data platform, Allen’s team now aims to help quadriplegics better navigate their world with assistive robots. Typically, a quadriplegic relies on human aides to perform even the most basic functions like eating and drinking; however, Brain-Computer Interfaces (BCI) offer the promise of independence with a robot. Wearing a BCI helmet, Dr. Allen’s grad student was able to move a robot around the room just by looking at objects on screen. The object on the screen triggers electroencephalogram (EEG) signals that are transmitted to the robot, which translates them into pointcloud images in the database. According to their research, “Noninvasive BCI’s, which are very desirable from a medical and therapeutic perspective, are only able to deliver noisy, low-bandwidth signals, making their use in complex tasks difficult. To this end, we present a shared control online grasp planning framework using an advanced EEG-based interface…This online planning framework allows the user to direct the planner towards grasps that reflect their intent for using the grasped object by successively selecting grasps that approach the desired approach direction of the hand. The planner divides the grasping task into phases, and generates images that reflect the choices that the planner can make at each phase. The EEG interface is used to recognize the user’s preference among a set of options presented by the planner.”

While technologies like LIDAR and GraspIt! enable robots to better perceive the human world, in the basement of the SEAS Engineering Building at Columbia University, Dr. Matei Ciocarlie is developing an array of affordable tactile sensors for machines to touch and feel their environments. Humans have very complex multi-modal systems, built through trial-and-error knowledge gained since birth. Dr. Ciocarlie ultimately aims to build a robotic gripper with the capability of a human hand. Using light signals, Dr. Ciocarlie has demonstrated “sub-millimeter contact localization accuracy” on a grasped object, used to determine the force applied in picking it up. At Columbia’s Robotic Manipulation and Mobility Lab (ROAM), Ciocarlie is tackling “one of the key challenges in robotic manipulation” by figuring out how “you reduce the complexity of the problem without losing versatility.” While he demonstrated a variety of new grippers and force sensors being deployed in such hostile environments as the International Space Station and a human’s cluttered home, the most immediately promising innovation is Ciocarlie’s therapeutic robotic hand (shown below).

According to Ciocarlie’s paper: “Fully wearable hand rehabilitation and assistive devices could extend training and improve quality of life for patients affected by hand impairments. However, such devices must deliver meaningful manipulation capabilities in a small and lightweight package. In experiments with stroke survivors, we measured the force levels needed to overcome various levels of spasticity and open the hand for grasping using the first of these configurations, and qualitatively demonstrated the ability to execute fingertip grasps using the second. Our results support the feasibility of developing future wearable devices able to assist a range of manipulation tasks.”

Across the ocean, Dr. Hossam Haick of Technion-Israel Institute of Technology has built an intelligent olfactory system that can diagnose cancer. Dr. Haick explains, “My college roommate had leukemia, and it made me want to see whether a sensor could be used for treatment. But then I realized early diagnosis could be as important as treatment itself.” Using an array of sensors composed of “gold nanoparticles or carbon nanotube” elements, patients breathe into a tube that detects cancer biomarkers through smell. “We send all the signals to a computer, and it will translate the odor into a signature that connects it to the disease we exposed to it,” says Dr. Haick. Last December, Haick’s AI reported 86% accuracy in predicting cancers in more than 1,400 subjects in 17 countries. The accuracy increased with use of its neural network in specific disease cases. Haick’s machine could one day have better olfactory senses than canines, which have been proven able to sniff out cancer.

When writing this post on robotic senses, I had several conversations with Alexa, and I am always impressed with her auditory processing skills. It seems that the only area in which humans will exceed robots will be taste; however, I am reminded of Dr. Hod Lipson’s food printer. As I watched Lipson’s concept video of the machine squirting, layering, pasting and even cooking something that resembled Willy Wonka’s “Everlasting Gobstopper,” I sat back in his Creative Machines Lab realizing that Sci-Fi is no longer fiction.

Want to know more about LIDAR technology and self-driving systems? Join RobotLab’s next forum, “The Future of Autonomous Cars,” with Steve Girsky, formerly of General Motors – November 29th @ 6pm, WeWork Grand Central NYC. RSVP

Stifled ambitions: a review of Google robotics

Despite recent attempts to tease the robotics projects incubating at its Google X skunkworks, industry observers say that Google has done more to stifle than advance innovation in robotics.

A bit of history

On December 4th, 2013, John Markoff, a technology reporter for The New York Times, broke the story that Google had acquired seven robotics companies and that Andy Rubin, of Android fame, would be heading the group. The seven were: Schaft, a Japanese start-up developing a humanoid robot; Industrial Perception, a Silicon Valley start-up that developed a computer vision system for loading and unloading trucks; Meka Robotics, a robot developer for academia; Redwood Robotics, a start-up intended to compete with the Baxter robot (and others) entering the small and medium-sized shop and factory marketplace; Bot & Dolly, a maker of robotic camera systems used for special effects such as in the movie “Gravity;” Autofuss, a design and marketing firm and a partner in Bot & Dolly; and Holomni, a maker of powered caster modules for omnidirectional vehicles.

On December 14th, 2013, Markoff followed up with the news that Google had added to its new stable of robotic companies by acquiring Boston Dynamics, a 20-year-old developer of mobile and off-road robotics and human simulation technology, mostly for DARPA and the Department of Defense.

Thus some of the industry’s leading startups, plus the entire 80-plus-person talent pool from Boston Dynamics, became part of Google. According to Markoff, quoting Rubin,

“The company’s initial market will be in manufacturing, e.g., electronics assembly which is mostly done by hand. Manufacturing and logistics markets not being served by today’s robotic technologies are clear opportunity markets and the new Google robots will be able to automate any or all of the processes from the supply chain to the distribution channels to the consumer’s front door thereby creating a massive opportunity.”

The eight acquisitions were the talk of the business news but one piece from SFGate Tech Chronicles quoting Brian Gerkey described the positive sentiment best:

“Google’s move into robotics is likely to draw renewed attention and money into the space. It’s a pretty exciting day for robotics when someone like Google makes an investment like that in robots, others are likely to follow suit. It can only spur investment and innovation.”

Sales, vesting, transfers and departures

From 2014 to 2016 there were rumors and some media attention about the difficulties Google was having coordinating the talent within its robotic acquisitions, particularly after Rubin left and Google let it be known that it wanted to sell off Boston Dynamics because “their two- and four-legged robots were too far from being marketable.” Commercial projects were discussed but none came to fruition. People started to leave or move over to other parts of Google and GoogleX. Of the people who came from the 2013 acquisitions, some have left, saying that the lack of direction and management had thwarted their desire to stay and contribute. Others have stayed but let it be known they’ll be available after their four-year sign-on stock options vest, which will happen later this year.

In 2015 and 2016 the news of dissent continued as the name and group managers changed. No products emerged. James Kuffner led the robotics group until he left in January, 2016 to become the CTO of the Toyota Research Institute. Aaron Edsinger followed, but then he too left.

In June of 2017 Google finally found a buyer for Boston Dynamics – SoftBank. SoftBank also bought Schaft, a Japan-based startup whose walking robots were never integrated into Google and which, like Boston Dynamics, operated separately from the rest of Google’s robotics acquisitions.

For SoftBank, which acquired Aldebaran in 2012 and helped them make the humanoid robot Pepper, the two acquisitions offer mobility and robotics talent as SoftBank grows its SoftBank Robotics joint venture with Alibaba and Foxconn.

Google made a mess

In a blistering recap of Google’s involvement in robotics from 2013 until now, Mark Bergen and Joshua Brustein wrote in BloombergBusinessweek Magazine:

“None of the acquired companies have robots in use beyond the offices of Google’s now-parent company, Alphabet Inc. At least three key robotics chiefs who joined in that 2013 wave left the company in the last few months, and, because four years is the typical vesting period for Google stock options, they probably won’t be the last. At this point, the exodus counts as a win for robotics, since many of the brightest minds in the field have essentially spent the past few years trapped in a time capsule. Ultimately, Google’s run on roboticists held the industry back more than moving it forward.”

Google today

Google hasn’t entirely receded from robotics. There is Waymo, Alphabet’s autonomous-car division; Project Titan, Google’s drone development effort; and Project Loon, its high-altitude balloon project. In fact, Project Loon balloons are being used today in Puerto Rico to provide stationary platforms from which emergency Internet and web services are available. Additionally, GoogleX has operated its “arm farm,” a room full of robotic arms that are learning to grasp and manipulate random objects and teach the other robots in the room how to do the same. In a recent GoogleX blog, Hans Peter Brondmo, who now heads up Google’s remaining robotics team, said:

“We’re working with the Google Brain team to explore how to teach robots new skills by learning from their shared experience, and we’re even simulating robots in the cloud so they can train fast, and then we’re transferring this learning onto real robots. By having virtual robots we can gather lots of data for training in the cloud. Then we transfer what the virtual robots learn to the real-world robots so they can quickly learn to perform new tasks. This is all critical research that will pave the (long) path toward building machines that can learn new skills quickly and operate safely and cost effectively in the world we live in.”

Bottom line

Rumors abound about the value of Google’s arm farm (the room where a dozen or more robot arms are using machine learning and swarm control to pick and place random objects, which Brondmo described in his blog). But so far, except for those who have left or moved over to Waymo, nothing but research has come from the 3-1/2 years Google has had a robotics group. Unless you count the years of anxiety, secrecy and disappointment for all those super-smart entrepreneurs who initially had hoped for so much from their new association with Google.

But as Brian Gerkey said, “…when someone like Google makes an investment like that in robots, others are likely to follow suit. It can only spur investment and innovation.” That has happened in spades! 2015, 2016 and 2017 have all shown exponential growth in investments and strategic acquisitions for the robotics industry.

Robots on the Rise


NEDO, Japan’s New Energy and Industrial Technology Development Organization, is a regular funder of robotic technology, has an office in Silicon Valley, and participates in various regional events to promote its work and future programs. One such event was Robots on the Rise: The Future of Robotics in Japan and the US held October 16th in Mountain View, CA and jointly sponsored by Silicon Valley Forum.

Over 400 people attended the all-day series of panels with well-known speakers and relevant subject matter. Panels covered mobility, agricultural robotics, search and rescue, and the retail and manufacturing revolutions. Henrik Christensen from UC San Diego gave a keynote overview of robotics in Japan and the US. He described the key drivers propelling the robotics industry forward and the digitization of manufacturing: mass customization, unmanned vehicles, the aging society (particularly in Japan), and the continuing need for application-specific integrators.

He was followed by Atsushi Yasuda from METI, Japan’s Ministry of Economy, Trade and Industry (the agency that funds NEDO) who emphasized Japan’s need to focus on technologies that can safely assist their aging population. Manufacturing, agriculture, nursing and medical care, plus disaster relief were points he detailed.

I was the moderator of a panel on The Manufacturing Revolution: Automated Factories, with speakers from Yaskawa (Chetan Kapoor), Yamaha (Hiro Sauou), OMRON/Adept (Edwardo De Robbio), GE (Steve Taub) and VEO Robotics (Patrick Sobalvarro). Trends in this arena are being driven by the global movement toward mass customization and the need for flexibility in automation and robotics. For the foreseeable future, that flexibility will mean keeping humans in the loop to collaborate with their robot counterparts.

There was also an exhibition with around 25 companies and agencies participating in a pop-up type of trade show. It was noisy, fun and informative.

Best line from the investment panel: “Invest in missionaries, not mercenaries.”

Second best line came from Henrik Christensen, on measuring the success of home robots by their “time to boredom.”

Most interesting question and answer about the future came from James Kuffner, the CTO of Toyota Research Institute, who said that Toyota asked the Institute what the company should do after self-driving reduces the size of the car industry. Kuffner said that Toyota decided to “pivot to robotics and particularly to assistance robots for health, elder and home care.”

In the panel on unmanned vehicles, the consensus was that mapping, proprietary driving data, regulation and weather were all holdups thwarting fully autonomous (Level 5) vehicles – those without pedals or a steering wheel. Because of those problems, it was their opinion that only Level 4 will be achieved in the next decade.

NEDO’s 2017 funding totals $1.17 billion and includes $99.1 million in robot-technology seed and mid-term funding for practical robotic solutions. Current projects include infrastructure inspection and maintenance, disaster response robots, elder care robots, and next-generation technologies in industrial and service robots and AI.

Talking Machines: The pace of change and the public view of machine learning, with Peter Donnelly


In episode ten of season three we talk about the rate of change (prompted by Tim Harford), take a listener question about the power of kernels, and talk with Peter Donnelly in his capacity with the Royal Society’s Machine Learning Working Group about the work they’ve done on the public’s views on AI and ML.

If you enjoyed this episode, you may also want to listen to:

See all the latest robotics news on Robohub, or sign up for our weekly newsletter.

Shaping animal, vegetable and mineral

The face of the father of quantum physics, Max Planck, emerges from a flat disk. In each state, the colors show the growth factors of the top (left) and bottom (right) layer, and the thin black lines indicate the direction of growth. The top layer is viewed from the front, and the bottom layer is viewed from the back, to highlight the complexity of the geometries. Credit: Harvard SEAS

By Leah Burrows

Nature has a way of making complex shapes from a set of simple growth rules. The curve of a petal, the swoop of a branch, even the contours of our face are shaped by these processes. What if we could unlock those rules and reverse engineer nature’s ability to grow an infinitely diverse array of shapes?

Scientists from Harvard’s Wyss Institute for Biologically Inspired Engineering and the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have done just that. In a paper published in the Proceedings of the National Academy of Sciences, the team demonstrates a technique to grow any target shape from any starting shape.

“Architect Louis Sullivan once said that ‘form ever follows function’,” said L. Mahadevan, Ph.D., Associate Faculty member at the Wyss Institute and the Lola England de Valpine Professor of Applied Mathematics, of Organismic and Evolutionary Biology and of Physics and senior author of the study. “But if one took the opposite perspective, that perhaps function should follow form, how can we inverse design form?”

In previous research, the Mahadevan group used experiments and theory to explain how naturally morphing structures — such as Venus flytraps, pine cones and flowers — change their shape, in the hopes of one day being able to control and mimic these natural processes. And indeed, experimentalists have begun to harness the power of simple, bioinspired growth patterns. For example, in 2016, in a collaboration with the group of Jennifer Lewis, a Wyss Institute Core Faculty member and the Hansjörg Wyss Professor of Biologically Inspired Engineering at SEAS, the team printed a range of structures that changed their shape over time in response to environmental stimuli.

“The challenge was how to do the inverse problem,” said Wim van Rees, Ph.D., a postdoctoral fellow at SEAS and first author of the paper. “There’s a lot of research on the experimental side but there’s not enough on the theoretical side to explain what’s actually happening. The question is, if I want to end with a specific shape, how do I design my initial structure?”

Inspired by the growth of leaves, the researchers developed a theory for how to pattern the growth orientations and magnitudes of a bilayer, two different layers of elastic materials glued together that respond differently to the same stimuli. By programming one layer to swell more and/or in a different direction than the other, the overall shape and curvature of the bilayer can be fully controlled. In principle, the bilayer can be made of any material, in any shape, and respond to any stimuli from heat to light, swelling, or even biological growth.
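The core mechanism — two bonded layers swelling by different amounts to produce curvature — has a classical 1D analogue in Timoshenko’s analysis of the bimetallic strip. As a hedged sketch (this is the textbook strip formula, not the paper’s full 2D inverse-design algorithm), the curvature produced by a given growth mismatch, and its inversion for equal layers, can be computed like so:

```python
def bilayer_curvature(eps_diff, t1, t2, E1, E2):
    """Timoshenko curvature (1/m) of a bonded two-layer strip.

    eps_diff: differential growth strain between the layers
    t1, t2:   layer thicknesses (m); E1, E2: elastic moduli (Pa)
    """
    m = t1 / t2          # thickness ratio
    n = E1 / E2          # stiffness ratio
    h = t1 + t2          # total thickness
    num = 6.0 * eps_diff * (1.0 + m) ** 2
    den = h * (3.0 * (1.0 + m) ** 2 + (1.0 + m * n) * (m ** 2 + 1.0 / (m * n)))
    return num / den

def required_mismatch(kappa, h):
    """Inverse problem for equal layers and moduli: the formula reduces to
    kappa = 1.5 * eps / h, so solve for the strain needed for a target curvature."""
    return kappa * h / 1.5

# 1% differential swelling across a 2 mm strip of equal layers:
kappa = bilayer_curvature(0.01, 0.001, 0.001, 1e9, 1e9)
print(kappa)  # 7.5 per metre, i.e. a bending radius of roughly 13 cm
```

The paper’s contribution is the far harder 2D version of `required_mismatch`: given the full curvature field of an arbitrary target surface, solve for the growth pattern of each layer.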

The team unraveled the mathematical connection between the behavior of the bilayer and that of a single layer.

“We found a very elegant relationship in a material that consists of these two layers,” said van Rees. “You can take the growth of a bilayer and write its energy directly in terms of a curved monolayer.”

That means that if you know the curvatures of any shape, you can reverse engineer the energy and growth patterns needed to grow that shape using a bilayer.

“This kind of reverse engineering problem is notoriously difficult to solve, even using days of computation on a supercomputer,” said Etienne Vouga, Ph.D., former postdoctoral fellow in the group and now an Assistant Professor of Computer Science at the University of Texas at Austin. “By elucidating how the physics and geometry of bilayers are intimately coupled, we were able to construct an algorithm that solves the needed growth pattern in seconds, even on a laptop, no matter how complicated the target shape.”

A snapdragon flower petal starting from a cylinder. In each state, the colors show the growth factors of the top (left) and bottom (right) layer, and the thin black lines indicate the direction of growth. The top layer is viewed from the front, and the bottom layer is viewed from the back, to highlight the complexity of the geometries. Credit: Harvard SEAS

The researchers demonstrated the system by modeling the growth of a snapdragon flower petal from a cylinder, a topographical map of the Colorado river basin from a flat sheet and, most strikingly, the face of Max Planck, one of the founders of quantum physics, from a disk.

“Overall, our research combines our knowledge of the geometry and physics of slender shells with new mathematical algorithms and computations to create design rules for engineering shape,” said Mahadevan. “It paves the way for manufacturing advances in 4-D printing of shape-shifting optical and mechanical elements, soft robotics as well as tissue engineering.”

The researchers are already collaborating with experimentalists to try out some of these ideas.

This research was funded in part by the Swiss National Science Foundation and the US National Science Foundation.

Could we build a Blade Runner-style ‘replicant’?

Sony Pictures

The new Blade Runner sequel will return us to a world where sophisticated androids made with organic body parts can match the strength and emotions of their human creators. As someone who builds biologically inspired robots, I’m interested in whether our own technology will ever come close to matching the “replicants” of Blade Runner 2049.

The reality is that we’re a very long way from building robots with human-like abilities. But advances in so-called soft robotics show a promising way forward for technology that could be a new basis for the androids of the future.

From a scientific point of view, the real challenge is replicating the complexity of the human body. Each one of us is made up of trillions of cells, and we have no clue how to build such a complex machine that is indistinguishable from us. The most complex machines today, for example the world’s largest airliner, the Airbus A380, are composed of millions of parts. To match the complexity level of humans, we would need to scale this complexity up about a million times.

There are currently three different ways that engineering is making the border between humans and robots more ambiguous. Unfortunately, these approaches are only starting points, and are not yet even close to the world of Blade Runner.

There are human-like robots built from scratch by assembling artificial sensors, motors and computers to resemble the human body and motion. However, extending current human-like robots would not bring Blade Runner-style androids closer to humans, because every artificial component, such as sensors and motors, is still hopelessly primitive compared to its biological counterparts.

There is also cyborg technology, where the human body is enhanced with machines such as robotic limbs, wearable and implantable devices. This technology is similarly very far away from matching our own body parts.

Sony Pictures

Finally, there is the technology of genetic manipulation, where an organism’s genetic code is altered to modify that organism’s body. Although we have been able to identify and manipulate individual genes, we still have a limited understanding of how an entire human emerges from genetic code. As such, we don’t know the degree to which we can actually programme code to design everything we wish.

Soft robotics: a way forward?

But we might be able to move robotics closer to the world of Blade Runner by pursuing other technologies, and in particular by turning to nature for inspiration. The field of soft robotics is a good example. In the last decade or so, robotics researchers have been making considerable efforts to make robots soft, deformable, squishable and flexible.

This technology is inspired by the fact that 90% of the human body is made from soft substances such as skin, hair and tissues. This is because most of the fundamental functions in our body rely on soft parts that can change shape, from the heart and lungs pumping fluid around our body to the eye lenses generating signals from their movement. Cells even change shape to trigger division, self-healing and, ultimately, the evolution of the body.

The softness of our bodies is the origin of all the functionality needed to stay alive. So being able to build soft machines would at least bring us a step closer to the robotic world of Blade Runner. Some recent technological advances include artificial hearts made out of soft functional materials that pump fluid through deformation. Similarly, soft, wearable gloves can help make hand grasping stronger. And “epidermal electronics” has enabled us to tattoo electronic circuits onto our skin.

Softness is the keyword that brings humans and technologies closer together. Sensors, motors and computers can suddenly be integrated into human bodies once they become soft, and the border between us and external devices becomes ambiguous, just as soft contact lenses became part of our eyes.

Nevertheless, the hardest challenge is how to make the individual parts of a soft robot body physically adaptable through self-healing, growth and differentiation. After all, in biological systems every part of a living organism is itself alive, which is what makes our bodies so adaptable and evolvable; replicating that property could make machines truly indistinguishable from ourselves.

It is impossible to predict when the robotic world of Blade Runner might arrive, and if it does it will probably be very far in the future. But as long as the desire to build machines indistinguishable from humans is there, the current trends of the robotic revolution could make it possible to achieve that dream.

Fumiya Iida, Lecturer in mechatronics, University of Cambridge

This article was originally published on The Conversation. Read the original article.

What is Catapult Launching of Drones

Catapult launching is one method for launching fixed-wing drones, which need initial airspeed in order to fly. A catapult throws the aircraft into the air quickly and easily where there is not enough distance for a take-off run, or where the drone has no landing gear (omitting it saves weight and control complexity). Catapult-launched airframes need additional reinforcement to withstand the launch forces.
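To see why rail length matters, assume constant acceleration along the catapult: reaching launch speed v over rail length d requires a = v²/(2d). As a rough illustrative sketch (the stall speed and rail length below are assumed example values, not specs for any particular airframe):

```python
G = 9.81  # standard gravity, m/s^2

def catapult_acceleration(v_launch, rail_length):
    """Constant acceleration (m/s^2) needed to reach v_launch over rail_length.

    From v^2 = 2 * a * d with the drone starting from rest on the rail.
    """
    return v_launch ** 2 / (2.0 * rail_length)

# A drone that must leave the rail at ~15 m/s, launched from a 3 m rail:
a = catapult_acceleration(15.0, 3.0)
print(a, a / G)  # 37.5 m/s^2, roughly 3.8 g
```

Several g of launch acceleration is exactly why the airframe needs the extra structural reinforcement mentioned above; a shorter rail or a faster aircraft pushes that figure higher still.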

Ask, discuss anything about robots and drones in our forums

See our Robot Book

This post was originally written by RoboticMagazine.com, and displaying it without our permission is not allowed.

The post What is Catapult Launching of Drones appeared first on Roboticmagazine.

Robohub Podcast

I am happy to announce that the Robots Podcast will be renamed the “Robohub Podcast”.

This name change is to avoid confusion about how the podcast and Robohub relate, a question we frequently get. The answer is that they are part of the same effort to connect the global robotics community to the world — and they were founded by many of the same people.

The podcast began in 2006 as “Talking Robots” and was launched by Dr. Dario Floreano at EPFL in Switzerland and his PhD students. Several of those PhD students then went on to launch the “Robots Podcast”, which will celebrate its 250th episode at the end of this year (make sure to check the whole playlist)! Robohub came a few years later as an effort to bring together all the communicators in robotics under one umbrella to provide free, high-quality information about robotics. Robohub has supported the podcast over the years by advising us, making connections for interviews, and sponsoring us to attend conferences.

Going forward, I am happy that our new name will show our close relationship to Robohub, and I look forward to many more interviews.


Happy listening!

Audrow Nash

Podcast Director, Robohub

Robohub Podcast #245: High-Performance Autonomous Vehicles, with Chris Gerdes



In this episode, Audrow Nash interviews Chris Gerdes, Professor of Mechanical Engineering at Stanford University, about designing high-performance autonomous vehicles. The idea is to make vehicles safer; as Gerdes says, he wants to “develop vehicles that could avoid any accident that can be avoided within the laws of physics.”

In this interview, Gerdes discusses developing a model for high-performance control of a vehicle; their autonomous race car, an Audi TTS named ‘Shelley,’ and how its autonomous performance compares to amateur and professional race car drivers; and an autonomous, drifting DeLorean named ‘MARTY.’

Chris Gerdes

Chris Gerdes is a Professor of Mechanical Engineering at Stanford University, Director of the Center for Automotive Research at Stanford (CARS) and Director of the Revs Program at Stanford. His laboratory studies how cars move, how humans drive cars and how to design future cars that work cooperatively with the driver or drive themselves. When not teaching on campus, he can often be found at the racetrack with students, instrumenting historic race cars or trying out their latest prototypes for the future. Vehicles in the lab include X1, an entirely student-built test vehicle, and Shelley, an Audi TT-S capable of turning a competitive lap time around the track without a human driver. Professor Gerdes and his team have been recognized with a number of awards including the Presidential Early Career Award for Scientists and Engineers, the Ralph Teetor award from SAE International and the Rudolf Kalman Award from the American Society of Mechanical Engineers.


Links


Udacity Robotics video series: Interview with Felipe Chavez from Kiwi


Mike Salem from Udacity’s Robotics Nanodegree is hosting a series of interviews with professional roboticists as part of their free online material.

This week we’re featuring Mike’s interview with Felipe Chavez, Co-Founder and CEO of Kiwi. Kiwi is a mobile robot company delivering food to hungry college students across the University of California, Berkeley campus. Listen to Felipe explain some of the challenges Kiwi faces when deploying their robots.

You can find all the interviews here. We’ll be posting them regularly on Robohub.
