Category Robotics Classification


Finally! Google sells Boston Dynamics to SoftBank

Spotmini by Boston Dynamics. Source: Boston Dynamics/YouTube

In a long-awaited transaction, The New York Times DealBook reported that SoftBank is buying Boston Dynamics from Alphabet (Google’s parent company). The Japanese startup Schaft is also included in the deal. Financial details of the acquisition were not disclosed.

Both Boston Dynamics and Schaft were acquired by Google while Andy Rubin was building Google’s robot group through a series of acquisitions. Both companies have continued to develop innovative mobile robots, and both have been on Google’s for-sale list.

Boston Dynamics, a 25-year-old DARPA- and DoD-funded company, designed two- and four-legged robots for the military. Videos of BD’s robots WildCat, Big Dog, Cheetah, SpotMini (shown above getting into an elevator) and Handle have been YouTube hits for years. Handle, BD’s most recent robot, is a two-wheeled, four-legged hybrid that can stand, walk, hop, run and roll at up to 9 MPH.

Schaft, a Japanese startup and participant in the DARPA Robotics Challenge, recently unveiled a two-legged robot that can climb stairs, carry a 125-pound payload, move in tight spaces and keep its balance throughout.

SoftBank, through another acquisition (of the French company Aldebaran, maker of the Nao and Romeo robots), and in a joint venture with Foxconn and Alibaba, has developed and marketed thousands of Pepper robots. Pepper is a cute, humanoid, mobile robot marketed and used as a guide and sales assistant. The addition of Boston Dynamics and Schaft to the SoftBank stable adds talent and technology to its growing robotics efforts, particularly through the Tokyo-based Schaft.

“Today, there are many issues we still cannot solve by ourselves with human capabilities. Smart robotics are going to be a key driver of the next stage of the information revolution,” said Masayoshi Son, chairman and chief executive of SoftBank.

Teaching ROS quickly to students

Lecturer Steffen Pfiffner of the University of Weingarten in Germany is teaching ROS to 26 students at once, at a very fast pace. His students, all enrolled in the university’s Master’s in Computer Science, use only a web browser. They connect to a web page containing the lessons, a ROS development environment and several ROS-based simulated robots. Using the browser, Pfiffner and his colleague Benjamin Stähle are able to teach ROS programming quickly and at scale. This is what Robot Ignite Academy is made for.

“With Ignite Academy our students can jump right into ROS without all the hardware and software setup problems. And the best: they can do this from everywhere,” says Pfiffner.

Robot Ignite Academy provides a web service which contains the teaching material in text and video format, the simulations of several ROS based robots that the students must learn to program, and the development environment required to build ROS programs and test them on the simulated robot.

Student’s point of view

Students bring their own laptops to class and connect to the online platform. From that moment, each laptop becomes a ROS development machine, ready to develop programs for simulations of many real robots.

The Academy provides the text, the videos and the examples that the student has to follow. The student then writes her own ROS program to make the robot perform a specific action, developing it as if she were working on a typical ROS development computer.

The main advantage is that students can use a Windows, Linux or Mac machine to learn ROS. They don’t even have to install ROS on their computers; the only prerequisite is that the laptop has a browser. So students avoid all the installation problems that frustrate them (and their teachers!), especially when they are starting out.

After class, students can continue learning at home, in the library, or even at the beach if Wi-Fi is available! All their code, learning material and simulations are stored online, so they can access them from anywhere, at any time, using any computer.

Teacher’s point of view

The platform benefits not only the students but also the teachers. Teachers do not have to create and maintain the material, prepare the simulations, or work across multiple different computers. They don’t even have to prepare the exams, which are already provided by the platform!

So what are the teachers for?

By making use of the provided material, the teacher can concentrate on guiding the students: explaining the most confusing parts, answering questions, suggesting modifications according to each student’s level, and adapting the pace to different types of students.

This new method of teaching ROS is spreading rapidly among universities and high schools that want to offer their students the latest and most practical instruction. The method, developed by Robot Ignite Academy, combines practice-based teaching with an online learning platform. Together, these two elements make teaching ROS a smooth experience and can dramatically accelerate students’ learning.

As user Walace Rosa indicates in his video comment about Robot Ignite Academy:

It is a game changer [in] teaching ROS!

The method is becoming very popular in robotics circles too, and many teachers are using it for younger students. For example, High School Mundet in Barcelona is using it to teach ROS to 15-year-old students.

Additionally, the academy provides a free online certification exam with several levels of knowledge certification. Many universities use this exam to certify that their students have actually learned the material, since the exam is quite demanding.

Some examples of past events

  •  1-week ROS course in Barcelona for SMART-E project team members. This was a private course given by Robot Ignite Academy in Barcelona for 15 members of the SMART-E project who needed to get up to speed with ROS fast. From the 8th to the 12th of May 2017.
  •  1-day ROS course for the Col·legi d’Enginyers de Barcelona. The 17th of May 2017.
  •  3-month course for the University of La Salle in Barcelona, within the Master’s in Automatics, Domotics and Robotics. From the 10th of May to the 29th of June 2017.
  •  Weekend ROS course for teenagers in Bilbao, Spain. The 20th and 21st of May 2017.
  •  We can also organize a special event like these for you and your team.

Helpful ROS videos

Mori: A modular origami robot

Mori pictured in a hand for scale

The fields of modular and origami robotics have become increasingly popular in recent years, with both approaches presenting particular benefits, as well as limitations, to the end user. Christoph Belke and Jamie Paik from RRL, EPFL and NCCR Robotics have recently proposed an elegant new solution that integrates both types of robotics in order to overcome their individual limitations: Mori, a modular origami robot.

Mori is the first robot to combine the concepts behind both origami robots and reconfigurable, modular robots. Origami robotics uses the folding of thin structures to produce single robots that can change their shape, while modular robotics uses large numbers of individual units to reconfigure the overall shape and address diverse tasks. Origami robots are compact and lightweight but have functional restrictions related to the size and shape of the sheet and the number of folds that can be created. By contrast, modular robots are more flexible in shape and configuration, but they are generally bulky and complex.

Singular module

Mori, an origami robot that is modular, merges the benefits of these two approaches and eliminates some of their drawbacks. The presented prototype has the quasi-2D profile of an origami robot (meaning that it is very thin) and the flexibility of a modular robot. By developing a small and symmetrical coupling mechanism with a rotating pivot that provides actuation, each module can be attached to another in any formation. Once connected, the modules can fold up into any desirable shape.

Each individual module has a triangular structure just 6 mm thick and 70 mm wide, and weighs only 26 g. Contained within this slender structure are actuators, sensors and an on-board controller, so the only external input required for full functionality is a power source. The researchers at EPFL have thereby managed to create a robot with the thin structure of an origami robot and the functional flexibility of a modular system.

The prototype is a highly adaptive modular robot and has been tested in three scenarios that demonstrate the system’s flexibility. First, the robots are assembled into a reconfigurable surface, which changes its shape according to the user’s input. Second, a single module is manoeuvred through a small gap, using rubber rings embedded in the rotating pivot as wheels, and assembled on the other side into a container. Third, the robot is coupled with feedback from an external camera, allowing the system to manipulate objects with closed-loop control.

Mori as a manoeuvrable surface

With Mori, the researchers have created the first robotic system that can represent reconfigurable surfaces of any size in three dimensions by using quasi-2D modules. The system’s design is adaptable to whatever task required, be that modulating its shape to repair damage to a structure in space, moulding to a limb weakened after injury in order to provide selective support or reconfiguring user interfaces, such as changing a table’s surface to represent geographical data. The opportunities are truly endless.

Reference

Christoph H. Belke and Jamie Paik, “Mori: A Modular Origami Robot,” IEEE/ASME Transactions on Mechatronics, doi:10.1109/TMECH.2017.2697310

ICRA 2017 in Singapore: Recap

Image: ICRA 2017

ICRA, the IEEE International Conference on Robotics and Automation, is an annual academic conference covering advances in robotics and one of the premier conferences in its field. This year, I was invited to attend its 2017 edition in Singapore.

With superb organization and a beautiful location, the event included talks by leading researchers and companies from all around the world, as well as workshops and an exhibitors’ area. The latter is where I spent most of my time, as I love direct interaction with companies and research centres. Also, at academic events like this, compared to trade fairs, you usually have the chance to talk directly with technical people who can explain all the ins and outs of their products in great detail.

The robotics community is still small enough that we all know each other. I had the pleasure of meeting good friends from companies such as Infinium Robotics, PAL Robotics and Fetch Robotics, among others. Infinium Robotics is the Singapore company where I work as CTO; I already wrote about this great company in one of my previous posts: “Infinium Robotics. Flying Robots”.

PAL Robotics is a Spanish company well known for having developed some of the best humanoid robots in the world. I have had a very good relationship with this company’s team for more than ten years. They are great people, well motivated and well managed, who have been able to think outside the box and bravely enter the world of robotic warehouse solutions with robots like TIAGo and StockBot.

Fetch Robotics is also one of the big players in robotics solutions for the warehousing industry. I also met other interesting people and had amazing chats with people from companies as key in this field as Amazon Robotics, DJI and Clearpath Robotics. At the end of this post, there is a full list of exhibiting companies.

I saw interesting technology: the rehabilitation exoskeletons from Fourier Intelligence (Shanghai), the Spidar-G 6-DOF haptic interface from the Tokyo Institute of Technology, the haptic systems of Force Dimension and Moog, the dexterous manipulators of Kinova, KUKA, ABB and ITRI, the modular robot components of HEBI Robotics and Keyi Technology, the 3D motion capture technologies of Phoenix Technologies and OptiTrack, the educational solutions of Ubitech, GT Robot and Robotis, and many, many others, most of them ROS-enabled.

As I usually do at these events, I recorded a video of the exhibition area to provide an idea of the technologies shown there.

Last but not least, I want to thank my friends from SIAA (Singapore Industrial Automation Association) for their kind friendship and support. They also organized the Singapore Pavilion at this event.

Ms LIM Sue Yin, Civic Seh (both SIAA) and Alejandro Alonso (IR/Hisparob VP)
List of exhibitors:

No more playing games: AlphaGo AI to tackle some real world challenges

Playing Go. Image: CC0

Humankind lost another important battle with artificial intelligence (AI) last month when AlphaGo beat the world’s leading Go player Ke Jie by three games to zero.

AlphaGo is an AI program developed by DeepMind, part of Google’s parent company Alphabet. Last year it beat another leading player, Lee Se-dol, by four games to one, but since then AlphaGo has substantially improved.

Ke Jie described AlphaGo’s skill as “like a God of Go”.

AlphaGo will now retire from playing Go, leaving behind a legacy of games played against itself. They’ve been described by one Go expert as like “games from far in the future”, which humans will study for years to improve their own play.

Ready, set, Go

Go is an ancient game that pits two players – one playing black pieces, the other white – against each other for dominance on a board usually marked with 19 horizontal and 19 vertical lines.

A typical game of Go: simple to learn but a lifetime to master. Flickr/Alper Cugun, CC BY

Go is a far more difficult game for computers to play than chess, because the number of possible moves in each position is much larger. This makes searching many moves ahead – feasible for computers in chess – very difficult in Go.
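To get a feel for why look-ahead search is so much harder in Go, here is a small back-of-the-envelope sketch. The branching factors used are the commonly cited rough averages (about 35 legal moves per chess position, about 250 per Go position); the exact figures vary by position and source.

```python
# Illustrative only: commonly cited *average* branching factors.
CHESS_BRANCHING = 35   # roughly 35 legal moves per chess position
GO_BRANCHING = 250     # roughly 250 legal moves per Go position

def positions_to_search(branching_factor, depth):
    """Number of move sequences a naive search must consider."""
    return branching_factor ** depth

# Looking just 4 moves (plies) ahead:
chess_4 = positions_to_search(CHESS_BRANCHING, 4)
go_4 = positions_to_search(GO_BRANCHING, 4)

print(f"chess, 4 plies: {chess_4:,}")   # about 1.5 million
print(f"go, 4 plies:    {go_4:,}")      # about 3.9 billion
print(f"ratio: {go_4 / chess_4:,.0f}x")
```

Even at this shallow depth the gap is over three orders of magnitude, and it widens exponentially with every extra ply, which is why brute-force search alone never conquered Go.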

DeepMind’s breakthrough was the development of general-purpose learning algorithms that can, in principle, be trained in domains of greater societal relevance than Go.

DeepMind says the research team behind AlphaGo is looking to pursue other complex problems, such as finding new cures for diseases, dramatically reducing energy consumption or inventing revolutionary new materials. It adds:

If AI systems prove they are able to unearth significant new knowledge and strategies in these domains too, the breakthroughs could be truly remarkable. We can’t wait to see what comes next.

This does open up many opportunities for the future, but challenges still remain.

Neuroscience meets AI

AlphaGo combines the two most powerful ideas about learning to emerge from the past few decades: deep learning and reinforcement learning. Remarkably, both were originally inspired by how biological brains learn from experience.

In the human brain, sensory information is processed in a series of layers. For instance, visual information is first transformed in the retina, then in the midbrain, and then through many different areas of the cerebral cortex.

This creates a hierarchy of representations where simple, local features are extracted first, and then more complex, global features are built from these.

The AI equivalent is called deep learning; deep because it involves many layers of processing in simple neuron-like computing units.
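As a toy illustration of this layered processing, the sketch below passes an input through three small layers of neuron-like units, each applying a weighted sum and a tanh nonlinearity so that later layers build on features extracted by earlier ones. The weights are arbitrary illustrative values, not trained ones, and the network is far smaller than anything used in practice.

```python
import math

def layer(inputs, weights):
    """One layer of simple neuron-like units: weighted sum, then tanh."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

def forward(x, layers):
    """Pass the input through each layer in turn (the 'hierarchy')."""
    for weights in layers:
        x = layer(x, weights)
    return x

# Three layers: raw input -> simple local features -> more global features.
# All weight values are made up for illustration.
network = [
    [[0.5, -0.2], [0.1, 0.8]],   # layer 1: 2 inputs -> 2 units
    [[0.3, 0.7], [-0.6, 0.4]],   # layer 2: 2 units  -> 2 units
    [[1.0, -1.0]],               # layer 3: 2 units  -> 1 output
]
output = forward([1.0, 0.5], network)
print(output)
```

Real deep networks differ mainly in scale (millions of units, dozens of layers) and in having their weights learned from data rather than written by hand.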

But to survive in the world, animals need to not only recognise sensory information, but also act on it. Generations of scientists and psychologists have studied how animals learn to take a series of actions that maximise their reward.

This has led to mathematical theories of reinforcement learning that can now be implemented in AI systems. The most powerful of these is temporal difference learning, which improves actions by maximising the expectation of future reward.
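A minimal sketch of the temporal difference idea, on a toy problem rather than Go: an agent on a 5-state chain steps left or right at random and receives reward +1 only on reaching the right end. The TD(0) update nudges each state’s value toward the reward plus the value of the next state, so each state comes to estimate the probability of finishing on the right. All names and parameters here are illustrative; this is the textbook update, not AlphaGo’s actual training procedure.

```python
import random

N_STATES = 5          # non-terminal states; terminals sit on either side
ALPHA = 0.1           # learning rate
GAMMA = 1.0           # no discounting for this episodic task

def td0(episodes, seed=0):
    rng = random.Random(seed)
    V = [0.0] * (N_STATES + 2)      # V[0] and V[N+1] are terminal (value 0)
    for _ in range(episodes):
        s = N_STATES // 2 + 1       # start each episode in the middle
        while 1 <= s <= N_STATES:
            s_next = s + rng.choice((-1, 1))
            reward = 1.0 if s_next == N_STATES + 1 else 0.0
            # The TD(0) update: move V[s] toward reward + gamma * V[s_next].
            V[s] += ALPHA * (reward + GAMMA * V[s_next] - V[s])
            s = s_next
    return V[1:N_STATES + 1]

values = td0(5000)
print([round(v, 2) for v in values])  # values increase from left to right
```

After a few thousand episodes the learned values approach the true success probabilities (1/6, 2/6, …, 5/6), without the agent ever being told those numbers explicitly.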

The best moves

By combining deep learning and reinforcement learning in a series of artificial neural networks, AlphaGo first learned human expert-level play in Go from 30 million moves from human games.

But then it started playing against itself, using the outcome of each game to relentlessly refine its decisions about the best move in each board position. A value network learned to predict the likely outcome given any position, while a policy network learned the best action to take in each situation.

Although it couldn’t sample every possible board position, AlphaGo’s neural networks extracted key ideas about strategies that work well in any position. It is these countless hours of self-play that led to AlphaGo’s improvement over the past year.

Unfortunately, as yet there is no known way to interrogate the network to directly read out what these key ideas are. Instead we can only study its games and hope to learn from these.

This is one of the problems with using such neural network algorithms to help make decisions in, for instance, the legal system: they can’t explain their reasoning.

We still understand relatively little about how biological brains actually learn, and neuroscience will continue to provide new inspiration for improvements in AI.

Humans can learn to become expert Go players based on far less experience than AlphaGo needed to reach that level, so there is clearly room for further developing the algorithms.

Also much of AlphaGo’s power is based on a technique called back-propagation learning that helps it correct errors. But the relationship between this and learning in real brains is still unclear.

What’s next?

The game of Go provided a nicely constrained development platform for optimising these learning algorithms. But many real world problems are messier than this and have less opportunity for the equivalent of self-play (for instance self-driving cars).

So are there problems to which the current algorithms can be fairly immediately applied?

One example may be optimisation in controlled industrial settings. Here the goal is often to complete a complex series of tasks while satisfying multiple constraints and minimising cost.

As long as the possibilities can be accurately simulated, these algorithms can explore and learn from a vastly larger space of outcomes than will ever be possible for humans. Thus DeepMind’s bold claims seem likely to be realised, and as the company says, we can’t wait to see what comes next.

This article was originally published on The Conversation. Read the original article.

AI for Good Global Summit welcomes “new frontier” for sustainable development

The world’s brightest minds in Artificial Intelligence (AI) and humanitarian action will meet with industry leaders and academia at the AI for Good Global Summit, 7-9 June 2017, to discuss how AI will assist global efforts to address poverty, hunger, education, healthcare and the protection of our environment. The event will in parallel explore means to ensure the safe, ethical development of AI, protecting against unintended consequences of advances in AI.

View the live webcast at: http://bit.ly/AI-for-Good-Webcast.

The event is co-organized by ITU and the XPRIZE Foundation, in partnership with 20 other United Nations (UN) agencies, and with the participation of more than 70 leading companies and academic and research institutes.

“Artificial Intelligence has the potential to accelerate progress towards a dignified life, in peace and prosperity, for all people,” said UN Secretary-General António Guterres. “The time has arrived for all of us – governments, industry and civil society – to consider how AI will affect our future. The AI for Good Global Summit represents the beginnings of our efforts to ensure that AI charts a course that will benefit all of humanity.”

The AI for Good Global Summit will emphasize AI’s potential to contribute to the pursuit of the UN Sustainable Development Goals.

Opening sessions will share expert insight into the state of play in AI, with leading minds in AI giving voice to their greatest ambitions in driving AI towards social good. ‘Breakthrough’ sessions will propose strategies for the development of AI applications and systems able to promote sustainable living, reduce poverty and deliver citizen-centric public services.

“Today, we’ve gathered here to discuss how far AI can go, how much it will improve our lives, and how we can all work together to make it a force for good,” said ITU Secretary-General Houlin Zhao. “This event will assist us in determining how the UN, ITU and other UN Agencies can work together with industry and the academic community to promote AI innovation and create a good environment for the development of artificial intelligence.”

“The AI for Good Global Summit has assembled an impressive, diverse ecosystem of thought leaders who recognize the opportunity to use AI to solve some of the world’s grandest challenges,” said Marcus Shingles, CEO of the XPRIZE Foundation. “We look forward to this Summit providing a unique opportunity for international dialogue and collaboration that will ideally start to pave the path forward for a new future of problem solvers working with XPRIZE and beyond.”

The AI for Good Global Summit will be broadcast globally as well as captioned to ensure accessibility.

You can view the live webcast at: http://bit.ly/AI-for-Good-Webcast.

For more information about this event, please visit: AI for Good Global Summit web page.

RoboCup video series: @Home league

RoboCup is an international scientific initiative with the goal of advancing the state of the art of intelligent robots. Established in 1997, its original mission was to field a team of robots capable of winning against the human soccer World Cup champions by 2050.

To celebrate 20 years of RoboCup, the Federation is launching a video series featuring each of the leagues with one short video for those who just want a taster, and one long video for the full story. Robohub will be featuring one league every week leading up to RoboCup 2017 in Nagoya, Japan.

This week, we take a look at the RoboCup@Home league. Robots helping at home can certainly ‘feel’ like the future. One day, these robots might help with various tasks around the house. You’ll hear about the history and ambitions of RoboCup from the trustees, and from inspiring teams around the world.

Short version:

Long version:

Want to watch the rest? You can view all the videos on the RoboCup playlist below:

https://www.youtube.com/playlist?list=PLEfaZULTeP_-bqFvCLBWnOvFAgkHTWbWC

Please spread the word! And if you would like to join a team, check here for more information.

 


China’s strategic plan for a robotic future is working: 500+ Chinese robot companies

In 2015, after much research, I wrote about China having 194 robot companies and used screen shots of The Robot Report’s Global Map to show where they were and a chart to show their makeup. We’ve just concluded another research project and have added hundreds of new Chinese companies to the database and global map.

Why is China so focused on robots?

China installed 90,000 robots in 2016, one-third of the world’s total and a 30% increase over 2015. Why?

Simply put, China has three drivers moving it toward country-wide adoption of robotics: scale, growth momentum, and money. First, startup companies can achieve scale quickly because the domestic market is so large. Second, companies are under pressure to automate, driving double-digit demand for industrial robots (according to the International Federation of Robotics). Third, the government is strongly behind the move.

Made in China 2025 and 5-Year Plans

Chinese President Xi Jinping has called for “a robot revolution” and initiated the “Made in China 2025” program. More than 1,000 firms have emerged (or begun to transition) into robotics to take advantage of the program, along with a new robotics association, CRIA (the China Robot Industry Alliance), according to a 2016 report by the Ministry of Industry and Information Technology. By contrast, according to the same report, the sector was virtually non-existent a decade ago.

Under “Made in China 2025,” and the five-year robot plan launched last April, Beijing is focusing on automating key sectors of the economy including car manufacturing, electronics, home appliances, logistics, and food production. At the same time, the government wants to increase the share of in-country-produced robots to more than 50% by 2020; up from 31% last year.

Robot makers, and companies that automate, are both eligible for subsidies, low-interest loans, tax waivers, rent-free land and other incentives. One such program lured back Chinese engineers working overseas; another poured billions of dollars into technology parks dedicated to robotics production and related businesses; another encouraged local governments to help regional companies deploy robots in their production processes; and despite the ongoing crackdown on capital outflows, green lights have been given to Chinese companies acquiring Western robotics technology companies.

Many of those acquisitions were reported by The Robot Report during 2016 and are reflected (with little red flags) in the chart of the top 15 acquisitions of robotics-related companies; highlights include:

  1. Midea, a Chinese consumer products manufacturer, acquired KUKA, one of the Big 4 global robot manufacturers
  2. The Kion Group, a predominately Chinese-funded warehousing systems and equipment conglomerate, acquired Dematic, a large European AGV and material handling systems company
  3. KraussMaffei, a big German industrial robots integrator, was acquired by ChemChina
  4. Paslin, a US-based industrial robot integrator, was acquired by Zhejiang Wanfeng Technology, a Chinese industrial robot integrator

China has set goals of making 150,000 industrial robots in 2020, 260,000 in 2025, and 400,000 by 2030. If achieved, the plan should help generate $88 billion over the next decade. China’s stated goal, in both its five-year plan and the Made in China 2025 program, is to overtake Germany, Japan, and the United States in manufacturing sophistication by 2049, the 100th anniversary of the founding of the People’s Republic of China. To make that happen, the government needs Chinese manufacturers to adopt robots by the millions. It also wants Chinese companies to produce more of those robots.

Analysts and Critics

Various research reports are predicting that more than 250,000 industrial pick and place, painting and welding robots will be purchased and deployed in China by 2019. That figure represents more than the total global sales of all types of industrial robots in 2014!

Research firms predicting dramatic growth for the domestic Chinese robotics industry are also predicting very low-cost devices. Their reports are contradicted by academics, roboticists and others who point out that there are so many new robot manufacturing companies in China that none will be able to manufacture many thousands of robots per year and thus benefit from economies of scale. Further, many of the components that make up a robot are intricate and costly, e.g., speed reducers, servo motors and control panels. Consequently, these are purchased from Nabtesco, Harmonic Drive, Sumitomo and other Japanese, German and US companies. Although a few of the startups are attempting to make reduction gears and similar devices, the lack of such component manufacturers in China may limit how low costs can go and how much can be done in-country for the time being.

“We aim to increase the market share of homegrown servomotors, speed reducers and control panels in China to over 30 percent by 2018 or 2019,” said Qu Xianming, an expert with the National Manufacturing Strategy Advisory Committee, which advises the government on plans to upgrade the manufacturing sector. “By then, these indigenous components could be of high enough quality to be exported to foreign countries,” Qu said in an interview with China Daily. “Once the target is met, it will lay down a strong foundation for Chinese parts makers to expand their presence.”

Regardless, China, with governmental directives and incentives, has become the world’s biggest buyer of robots and is also growing a very large in-country industry to make and sell robots of all types.

 

The Robot Report now has over 500 Chinese companies in its online directories and on its Global Map

The Robot Report and its research team have been able to identify over 500 companies that make, or are directly involved in making, robots in China. The CRIA (China Robot Industry Alliance) and other sources suggest the number is closer to 800. The Robot Report is limited by our own research capabilities, language translation limitations, and the scarcity of information about Chinese robotics companies, their websites and contact people.

These companies are combined with other global companies – now totaling over 5,300 – in our online directories and plotted on our global map so that you can research by area. You can explore online and filter in a variety of ways.

Use Google’s directional and +/- controls to navigate, zoom, and home in on a geographical area of interest (or double-click near where you want to enlarge). Click one of the colored markers to get a pop-up window with the company’s name, type, focus, location and a link to its website.

[NOTE: the map shows a single entry for each company’s headquarters, regardless of how many branches, subsidiaries and factory locations that company might have; consequently, international companies with factories and service centers in China won’t appear. Further note that The Robot Report’s database doesn’t include companies that merely use robots; it focuses on those involved in making robots.]

The Filter pull-down menu lets you choose any one of the seven major categories:

  1. Industrial robot makers
  2. Service robots used by corporations and governments
  3. Service robots for personal and private use
  4. Integrators
  5. Robotics-related start-up companies
  6. Universities and research labs with special emphasis on robotics
  7. Ancillary businesses providing engineering, software, components, sensors and other products and services to the industry.

In the chart below, 500 Chinese companies are tabulated by business type and area of focus. Your help in making the map as accurate and up-to-date as possible would be greatly appreciated: please send robotics-related companies that we have missed (or that are new) to info@therobotreport.com.

 

Localization uncertainty-aware exploration planning

Autonomous exploration and reliable mapping of unknown environments is a major challenge for mobile robotic systems. In many important application domains, such as industrial inspection or search and rescue, the task is further complicated by the fact that operations often have to take place in GPS-denied environments and possibly visually-degraded conditions.

Source: Dr Kostas Alexis, UNR

In this work, we move away from deterministic approaches to autonomous exploration and propose a localization uncertainty-aware receding-horizon exploration and mapping planner, verified using aerial robots. The planner follows a two-step optimization paradigm. First, in an online-computed random tree, the algorithm finds a finite-horizon branch that maximizes the amount of space expected to be explored. The first viewpoint configuration of this branch is selected, but the path towards it is decided in a second planning step. In that step, a new tree is sampled, admissible branches arriving at the reference viewpoint are found, and the robot’s belief about its state and the tracked landmarks of the environment is propagated along each of them. The branch that minimizes the expected localization uncertainty is selected, the corresponding path is executed by the robot, and the whole process repeats iteratively.
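The two-step loop above can be caricatured in a few lines of code. The sketch below runs on a toy 1-D world of 100 cells: step 1 picks the next viewpoint by an exploration-gain proxy, and step 2 picks, among sampled candidate paths to that viewpoint, the one minimizing an uncertainty proxy. The world model, the gain measure and the uncertainty measure (path length standing in for belief covariance) are all invented for illustration; the real planner propagates the robot’s full belief state and is far more sophisticated.

```python
import random

def view(v):
    """Cells observed from viewpoint v (a small sensor footprint)."""
    return set(range(max(v - 2, 0), min(v + 3, 100)))

def plan_step(explored, position, rng):
    # Step 1: sample candidate viewpoints; keep the one expected to
    # newly explore the most cells (exploration-gain proxy).
    candidates = [rng.randint(0, 99) for _ in range(20)]
    viewpoint = max(candidates, key=lambda v: len(view(v) - explored))
    # Step 2: sample candidate paths (here just one intermediate waypoint)
    # to that viewpoint; keep the shortest, with total travel distance
    # standing in for expected localization uncertainty.
    paths = [[position, rng.randint(0, 99), viewpoint] for _ in range(10)]
    path = min(paths, key=lambda p: sum(abs(a - b) for a, b in zip(p, p[1:])))
    return viewpoint, path

def explore(iterations=30, seed=1):
    rng = random.Random(seed)
    explored, position = set(), 50
    for _ in range(iterations):
        viewpoint, path = plan_step(explored, position, rng)
        position = viewpoint          # "execute" the chosen path
        explored |= view(viewpoint)   # map what the sensor sees
    return explored

print(f"cells explored: {len(explore())}/100")
```

The essential structure matches the paper’s paradigm: an outer objective (exploration gain) chooses where to go next, and an inner objective (expected uncertainty) chooses how to get there, repeated in a receding-horizon fashion.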

The algorithm has been experimentally verified with aerial robotic platforms equipped with a stereo visual-inertial system, operating in both well-lit and dark conditions, as shown in our videos.

To enable further developments, research collaboration and consistent comparison, we have released an open source version of our localization uncertainty-aware exploration and mapping planner, experimental datasets and interfaces. To get the code, please visit: https://github.com/unr-arl/rhem_planner

This research was conducted at the Autonomous Robots Lab of the University of Nevada, Reno.

Reference:

Christos Papachristos, Shehryar Khattak, Kostas Alexis, “Uncertainty-aware Receding Horizon Exploration and Mapping using Aerial Robots,” IEEE International Conference on Robotics and Automation (ICRA), May 29-June 3, 2017, Singapore

Looking at new trends in Distributed Robotics Systems and Society

Figure 1: A distributed robotic system managing the logistics of a warehouse.

It isn’t a secret that distributed robotic systems are starting to revolutionize many applications, from targeted material delivery (e.g., Amazon Robotics) to precision farming. Assisted by technological advancements such as cloud computing, novel hardware design, and new manufacturing techniques, distributed robot systems are becoming an important part of industrial activities, including warehouse logistics and autonomous transportation.

However, as many engineers and scientists in this field know, several of the heterogeneous characteristics of these systems that make them ideal for certain future applications — robot autonomy, decentralized control, collective emergent behavior, collective learning, knowledge sharing, etc. — hinder the evolution of the technology from academic institutions to the public sphere. For instance, controlling the motion and behavior of large teams of robots still presents unique challenges for human operators, who cannot yet effectively convey their high-level intentions in application. Moreover, robots collaborating through the cloud might find difficulties applying shared knowledge due to physical hardware differences. Solutions to these issues might be necessary steps towards mainstream adoption.

Figure 2: Different types of robots share a blockchain communication channel using their public keys
as main identifiers.

In response to these challenges, new lines of research propose innovative synergies to tackle the problems currently existing in the field: for instance, the inclusion of wearable and gaming technologies to reduce the complexity of controlling a robotic swarm for human operators, or the use of blockchain-based models to create new consensus and business models for large teams of robots.

In order to understand the current state of the art in distributed robotic systems and foresee its breakthroughs, the International Journal of Advanced Robotic Systems decided to launch a special issue titled “Distributed Robotic Systems and Society”. This special issue seeks to move beyond the classical view of distributed robotic systems to advance our understanding of the future role of these systems in the marketplace and public society. Insights into open questions in the field are especially suited to this issue. For instance, what security methods are available and efficient for these systems? What kinds of distributed robotic algorithms are suitable for introducing human-oriented interactions? Are there new interfaces to connect with these systems or reduce their complexity? Are distributed networks such as Bitcoin a feasible way to integrate distributed robotic systems into our society? Are there new business models for distributed robot ventures? How can distributed robotic systems make use of the unlimited information accessible in the cloud?

We also welcome submissions on other topics addressing multi-robot systems in society. We seek papers with conceptual and theoretical contributions as well as papers documenting valuable results of experiments conducted with real robots. Finally, the editorial team of this special issue (Dr. Penaloza, Dr. Hauert, and myself) would like to encourage researchers and scientists to submit their manuscripts. We are confident that the ideas, methods, and results included in this special issue will help the scientific community, as well as industry, reach new horizons in the field of distributed robotic systems.

The Drone Center’s Weekly Roundup: 6/5/17

A German Heron 1 UAV. Credit: Airbus Defence and Space

May 29, 2017 – June 4, 2017

News

A German court has thrown out a protest filed by U.S. drone maker General Atomics Aeronautical Systems over the German military’s decision to acquire the Israel Aerospace Industries Heron TP drone over the U.S. firm’s Reaper. The decision by the Oberlandesgericht, the country’s high court, allows the Bundeswehr to proceed with the planned acquisition of five Heron TPs, a medium-altitude long-endurance surveillance and reconnaissance drone. (DefenseNews)

Commentary, Analysis, and Art

At War on the Rocks, Jonathan Gillis argues that the U.S. military is not prepared for a future filled with enemy drones.

Also at War on the Rocks, Michael Horowitz considers the ways in which emerging technologies will shape how the U.S. military fights in future conflicts.

In a series of articles at Breaking Defense, Sydney J. Freedberg Jr. looks at how the U.S. military is integrating artificial intelligence into its operations.  

At Defense One, Patrick Tucker writes that Poland is planning to invest in small, lethal drones rather than large unmanned systems like the Reaper.

A study by the University of Washington found that delivery drones tend to produce fewer carbon emissions than trucks when traveling short distances. (UW Today)

At TechCrunch, Brian Heater argues that the DJI Spark foldable drone is not quite the “mainstream” drone that it was made out to be.

At Popular Mechanics, Joe Pappalardo considers the role that small, disposable drones will play in the future of warfare.

At ArsTechnica, Sean Gallagher recalls his role in the U.S. Navy’s early experiments with unmanned aircraft.

At the Montreal Gazette, Marc Garneau discusses the potential threat posed by drones to aircraft.

At the Washington Post, Michael Laris considers what is likely to happen in the wake of the federal appeals court’s decision to strike down the FAA’s drone registration rule for hobbyists.

At the Jamestown Foundation, Tobias J. Burgers and Scott N. Romaniuk look at how al-Qaeda learned to adapt to U.S. drone strikes.

The New York Times reports that a former head of the CIA’s drone program will now lead the agency’s Iran operations.

A paper in Remote Sensing offers a survey of the different systems and methods for using drones for marine mammal research. (MDPI)

A paper in Frontiers in Plant Science compares the use of drones and satellite images in monitoring plant invasions.
In Critical Studies in Security, Katharine Hall Kindervater argues that targeted killings are best understood within the context of a shift towards lethal surveillance.

In the Air & Space Power Journal, Lt. Col. Thomas S. Palmer and Dr. John P. Geis II argue that effective counter-drone weapons will be essential in future conflicts.

In the Naval War College Review, Jeffrey E. Kline considers how to effectively integrate robotics into the fleet while recognizing fiscal constraints.

Know Your Drone

Researchers at the Charles Stark Draper Laboratory and Howard Hughes Medical Institute have created a system that turns live dragonflies into steerable drones. (Gizmodo)

A team at the University of Sherbrooke has developed a solar-powered drone that can autonomously land on lakes to recharge its batteries. (IEEE Spectrum)

U.S. firm Drone Aviation Holding Corp. unveiled an automated winch tethering system for DJI Inspire commercial multirotor drones. (Unmanned Systems Technology)

Amazon has been awarded a patent for a shipping label that doubles as a parachute for items delivered by drone. (GeekWire)

Meanwhile, Walmart has been awarded a patent for a system that uses blockchain technology to keep track of delivery drones. (CoinDesk)

The Office of Naval Research is developing a drone that can detect buried mines for use in amphibious landings. (Shephard Media)

Apple announced that its education programming app, Swift Playgrounds, will soon support code-writing for robots and drones. (The Verge)

Belarusian firms presented a range of new military unmanned aircraft at the MILEX 2017 exhibition in Minsk, including the Belar YS-EX, a medium-altitude long-endurance system. (IHS Jane’s 360)

In a test flight, China’s Caihong solar-powered ultra long-endurance drone reached an altitude of 65,000 feet. (The Sun)

Defense firm Lockheed Martin successfully completed a beyond line of sight pipeline inspection operation with its Indago 2 commercial multirotor drone. (Unmanned Aerial Online)

Robot maker SMP Robotics unveiled the S5 Security Robot, an unmanned ground vehicle. (Unmanned Systems Technology)

In a test, the U.S. Army used two Raytheon Stinger anti-aircraft missiles to intercept two drones. (Press Release)

The U.S. Special Operations Command has completed testing for its Joint Threat Warning System sensor for the Puma hand-launched tactical drone. (Shephard Media)

Aerospace firm Russian Helicopters is developing a fixed-wing vertical take-off and landing drone. (Shephard Media)

The U.S. Army is planning to test its autonomous trucks on a public highway in Michigan later this month. (Voice of America)

Texas Instruments has designed two circuit-based subsystems that it claims could increase the efficiency of battery-powered drones. (Drone Life)

Drones at Work

In a proof of concept demonstration, Drone Dispatch delivered a box of donuts to a customer by drone in Colorado. (CNET)

The organizers of the Torbay Airshow in the U.K. banned the use of drones at the event. (Devon Live)

Police in Snellville, Georgia are investigating various reports of a drone being used to spy on residents in their homes. (WSBTV)

Officers from the Stafford County Sheriff’s Office in Virginia used a drone to find an armed suspect. (WTOP)

The Albany County Sheriff’s Office in New York has acquired a drone for a range of operations. (Albany Times Union)

North Dakota’s governor has established a task force to support the development of counter-drone technologies. (Press Release)

The Israel Defense Forces is equipping infantry, border defense, and combat intelligence corps units with DJI drones. (Times of Israel)

A Russian bank plans to begin using drones to deliver cash to customers. (Forbes)

Industry Intel

Speaking at the Code Conference, Intel CEO Brian Krzanich said that Intel will not develop a consumer drone. (Recode)

Snap, the social media company that owns Snapchat, has acquired Ctrl Me Robotics, a California-based drone startup. (Buzzfeed)

The Australian Ministry of Defense announced that it will invest $75 million in small unmanned aircraft systems, including the AeroVironment Wasp AE, for the Australian Army. (Press Release)

The Netherlands Ministry of Defence selected the Insitu Integrator to replace the Insitu ScanEagle. (Unmanned Systems Technology)

The U.S. Army awarded Six3 Advanced Systems a $10.5 million contract to design and develop a prototype for a squad of human and unmanned assets. (DoD)

Three German firms, ESG Elektroniksystem und Logistik, Diehl Defence and Rohde & Schwarz, have partnered to market the Guardion counter-drone solution. (Shephard Media)

2G Robotics will provide the laser scanning, imaging, and illumination systems for the Norwegian Navy’s Kongsberg Maritime Hugin autonomous underwater vehicles. (Shephard Media)

France’s Direction Générale de l’Armement will take delivery of the Thales Spy’Ranger mini drone beginning in late 2018. (IHS Jane’s Defence Weekly)

For updates, news, and commentary, follow us on Twitter. The Weekly Drone Roundup is a newsletter from the Center for the Study of the Drone. It covers news, commentary, analysis and technology from the drone world. You can subscribe to the Roundup here.

Robohub Digest 05/17: RoboCup turns 20, ICRA in Singapore, robot inspector helps check bridges

A quick, hassle-free way to stay on top of robotics news, our robotics digest is released on the first Monday of every month. Sign up to get it in your inbox.

20 years of RoboCup

20 years in the books! RoboCup, which first started in 1997, was originally established to bring forth a team of robots that could beat the human soccer World Cup champions. Twenty years on, RoboCup is so much more than just a soccer competition. In fact, it has grown into an international movement with a variety of leagues. Teams compete against each other in four different leagues and many sub-competitions, including home, work, and rescue missions. The complexity of missions in RoboCup requires intelligent, dynamic, sensing robots that can react to chaotic and changing environments. And in its 20-year history, the competition has produced numerous winners who have gone on to achieve great things.

Without robot competitions like RoboCup, the field of robotics wouldn’t be where it is today. So to celebrate 20 years of RoboCup, the Federation launched a video series featuring each of the leagues with one short video for those who just want a taster, and one long video for the full story. Robohub will be featuring one league every week leading up to RoboCup 2017 in Nagoya, Japan. You can watch all videos from the playlist here.

ICRA 2017 in Singapore

While we have RoboCup 2017 to look forward to, the IEEE 2017 International Conference on Robotics and Automation (ICRA) took place in Singapore. Under the conference theme “Innovation, Entrepreneurship, and Real-world Solutions”, the event brought together engineers, researchers, entrepreneurs and industry to address some of the major challenges of our times.

This year’s keynote speakers included Louis Phee, who described development and implementation of a surgical robot called EndoMaster, and Peter Luh, giving an overview of Industry 4.0. Alongside the speeches, the event included 25 workshops and tutorials, various exhibitions, as well as four Robot Challenges.

Self-driving cars: Competition heats up

May wasn’t just events and competitions. In the world of autonomous vehicles, France’s Groupe PSA, the second-largest car manufacturer in Europe, which owns brands such as Peugeot and Citroen, has teamed up with autonomous car maker nuTonomy. The collaboration will seek to build a self-driving Peugeot 3008, which they hope will hit the roads of Singapore in the not-too-distant future.

Groupe PSA and other well-known car manufacturers, including Ford Motors, who are just starting their bid to enter the autonomous car market, are lagging way behind Waymo (the self-driving car company of Google’s parent Alphabet) when it comes to putting their cars on actual roads. And with Waymo about to team up with ride-hailing start-up Lyft, Google is getting ever closer to making its autonomous cars part of mainstream traffic.

Another player in the self-driving car game we haven’t mentioned yet is cab service Uber. The company is locked in a dispute with Google over allegedly stolen design secrets and will likely be going to public trial later in the year. Linked to the lawsuit, Uber fired the engineer at the heart of the dispute, Anthony Levandowski, who is suspected of having stolen company secrets when he left Google and founded his own company, Otto, which was acquired by Uber last year. The information war continues.

Self-driving cars: Innovation

Meanwhile, a group of veterans previously linked with Alphabet founded their own company – DeepMap Inc. – which aims to develop systems that allow cars to navigate complex cityscapes.

It’s not just cityscapes that are complex to navigate. Other vehicles, cyclists, and pedestrians on busy roads form part of the difficult, unpredictable environment a self-driving car has to cope with. It will therefore become necessary for autonomous cars to understand and predict behaviour. Here, machine learning will be key, as Dr Nathan Griffiths explains in The Royal Society’s blog on machine learning in research.

And with so much interest and research into autonomous, intelligent cars, it’s not surprising that some, like Chris Urmson in a recent lecture at CMU, predict we will see a shift from the traditional transport model where people own their cars to a more dynamic, responsive model of “Transportation as a Service”, in which companies own fleets of cars that can be used by anyone when and where they’re needed.

Robots in the fields

Many of the innovations that have enabled autonomous cars are transferable to industrial, commercial and agricultural vehicles. In the case of the latter, self-driving tractors and precision agribots have already increased productivity and made 24-hour autonomous, high-yield farming a possibility.

In Salinas Valley, California, robots are already used to pick lettuce, and to help vineyard owners decide when they need to water their plants. And GV (formerly Google Ventures) just invested $10 million into Abundant Robotics to build a picking robot, initially to pick apples but with potential for adaptation to support the harvest of other fruit.

Uptake of farming technologies is believed to be slower than it could be, with farmers hesitant to adopt precision-agriculture products, software, equipment and practices. But the farming (r)evolution is well underway: agricultural robotics is already a $3 billion industry, set to grow to an impressive $12 billion by 2026.

Robots in the skies

While robots are still waiting to be fully accepted in agriculture, it’s no secret that the US military uses drones extensively. What came as a surprise was the strange video feed that surfaced in May from what appeared to be a drone flying over Florida’s panhandle, apparently sponsored by the National Reconnaissance Office – a body that doesn’t usually publicise its drone-related activities. The footage, now believed to have been shot in February, was likely uploaded as a demo video by a contractor.

While the Florida video caused quite a bit of confusion this month, another drone was making headlines for very different reasons. Previously used in military operations, the ArcticShark (a modified version of the military TigerShark) is now helping to fight climate change, on a scientific mission to help scientists understand cloud formation and other atmospheric processes.

In other drone-related news, the Alaska Department of Transportation and Public Facilities has allowed some of its employees to receive licenses to operate drones to support projects involving roads, bridges and other structures. And a report has shown that drone funding fell by 64% in 2016 compared to 2015, with DJI dominating the market and issues such as battery life, connectivity problems and drone regulations stifling development efforts.

Robots in the water

From the skies to the sea: this May, NATO Nations agreed to use JANUS for their digital underwater communications. JANUS has the potential to make military and civilian, NATO and non-NATO, devices fully interoperable, doing away with the communication problems between systems and models by different manufacturers that have made underwater communication difficult up to this point.

Robots in the lab

Researchers at the Institute for Human and Machine Cognition in Pensacola, Florida, have developed a two-legged robot that can run without using sensors or a computer. The robot, called the Planar Elliptical Runner, is stable through its physical design alone, which makes it different from other two- or four-legged robots.

Meanwhile, a team at the University of California, Berkeley, has come up with a nimble-fingered robot that is able to pick up a wide range of objects using a 3D sensor and a deep learning neural network. It may not be perfect, but it’s the most nimble-fingered robot yet and the technology may find applications in picking and manufacturing in future.

Human-Machine Interaction

Most of the robots we interact with are practical helpers, offering support in the home, on the road or at work. Innovations some of us may have interacted with are autonomous lawn mowers, cars with self-driving features, or Amazon’s Alexa. And there are plenty more robots in the pipeline.

A four-wheeled, waterproof, battery-powered robot inspector developed by a team in Nevada may soon be supporting civil engineers and safety inspectors with bridge checks, reducing the chance of human error and omissions that could lead to a collapse such as the I-35W bridge disaster in Minnesota in 2007.

And finally, engineers in Germany have built a robot priest called BlessU-2 that can beam light from its hands and deliver blessings in five languages. The robot is meant to spark discussions about faith, the church and the potential of AI.

Learning resources: Robot Academy

And to finish off our digest for May, we wanted to highlight a new open online resource for robotics education: the Robot Academy. Developed by Professor Peter Corke and the Queensland University of Technology (QUT), the Academy offers more than 200 lessons from robot joint control architecture to limits of electric motor performance. So if you find yourself with a bit of time on your hands, why not try a Robot Academy Masterclass?

Or if you’re after something a little bit different, there’s a new toolkit on “Computational Abstractions for Interactive Design of Robotic Devices” that allows you to drag and drop parts of a virtual robot on screen without needing to know exactly what connects to what, as a complex physics engine ensures your robot won’t fail or fall over.


Upcoming events for June – July 2017

Intelligent Ground Vehicle Competition: June 2-5, Rochester, MI.

CES Asia: June 7-9, Shanghai, China.

Unmanned Cargo Ground Vehicle Conference: June 13-14, Maaspoort, Venlo, The Netherlands.

Autonomous machines world: June 26-27, Berlin, Germany.

RoboUniverse: June 28-30, Seoul.

CIROS: July 5, 2017 – July 8, 2017, Shanghai, China. 

ASCEND Conference and Expo: July 19, 2017 – July 21, 2017, Portland, OR 

RoboCup: July 25, 2017 – July 31, 2017, Nagoya, Japan 

Call for papers:

1st Annual Conference on Robot Learning (CoRL 2017): Call for papers deadline 28 June.

Giving robots a sense of touch

A GelSight sensor attached to a robot’s gripper enables the robot to determine precisely where it has grasped a small screwdriver, removing it from and inserting it back into a slot, even when the gripper screens the screwdriver from the robot’s camera. Photo: Robot Locomotion Group at MIT

Eight years ago, Ted Adelson’s research group at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled a new sensor technology, called GelSight, that uses physical contact with an object to provide a remarkably detailed 3-D map of its surface.

Now, by mounting GelSight sensors on the grippers of robotic arms, two MIT teams have given robots greater sensitivity and dexterity. The researchers presented their work in two papers at the International Conference on Robotics and Automation last week.

In one paper, Adelson’s group uses the data from the GelSight sensor to enable a robot to judge the hardness of surfaces it touches — a crucial ability if household robots are to handle everyday objects.

In the other, Russ Tedrake’s Robot Locomotion Group at CSAIL uses GelSight sensors to enable a robot to manipulate smaller objects than was previously possible.

The GelSight sensor is, in some ways, a low-tech solution to a difficult problem. It consists of a block of transparent rubber — the “gel” of its name — one face of which is coated with metallic paint. When the paint-coated face is pressed against an object, it conforms to the object’s shape.

The metallic paint makes the object’s surface reflective, so its geometry becomes much easier for computer vision algorithms to infer. Mounted on the sensor opposite the paint-coated face of the rubber block are three colored lights and a single camera.

“[The system] has colored lights at different angles, and then it has this reflective material, and by looking at the colors, the computer … can figure out the 3-D shape of what that thing is,” explains Adelson, the John and Dorothy Wilson Professor of Vision Science in the Department of Brain and Cognitive Sciences.
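The recovery step Adelson describes resembles classical photometric stereo: with three known light directions and an idealized Lambertian (matte) surface, the per-pixel intensities under each light form a linear system whose solution points along the surface normal. The following is a minimal sketch of that idealized principle, not GelSight's actual calibrated reconstruction pipeline; the function name is illustrative.

```python
# Minimal photometric-stereo sketch: for one pixel, L @ (rho * n) = I,
# where L stacks the unit light directions, rho is albedo, n is the
# surface normal, and I holds the observed intensities. Solving and
# normalizing recovers n. GelSight in practice uses calibrated lookup
# tables rather than this idealized Lambertian model.
import numpy as np

def estimate_normal(intensities, light_dirs):
    """intensities: (3,) brightness of one pixel under each light.
    light_dirs:  (3, 3) array whose rows are unit light directions.
    Returns the unit surface normal at that pixel."""
    g = np.linalg.solve(np.asarray(light_dirs, dtype=float),
                        np.asarray(intensities, dtype=float))
    return g / np.linalg.norm(g)
```

Repeating this at every pixel yields a normal map, which can then be integrated into the detailed 3-D surface map the article describes.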

In both sets of experiments, a GelSight sensor was mounted on one side of a robotic gripper, a device somewhat like the head of a pincer, but with flat gripping surfaces rather than pointed tips.

Contact points

For an autonomous robot, gauging objects’ softness or hardness is essential to deciding not only where and how hard to grasp them but how they will behave when moved, stacked, or laid on different surfaces. Tactile sensing could also aid robots in distinguishing objects that look similar.

In previous work, robots have attempted to assess objects’ hardness by laying them on a flat surface and gently poking them to see how much they give. But this is not the chief way in which humans gauge hardness. Rather, our judgments seem to be based on the degree to which the contact area between the object and our fingers changes as we press on it. Softer objects tend to flatten more, increasing the contact area.

The MIT researchers adopted the same approach. Wenzhen Yuan, a graduate student in mechanical engineering and first author on the paper from Adelson’s group, used confectionery molds to create 400 groups of silicone objects, with 16 objects per group. In each group, the objects had the same shapes but different degrees of hardness, which Yuan measured using a standard industrial scale.

Then she pressed a GelSight sensor against each object manually and recorded how the contact pattern changed over time, essentially producing a short movie for each object. To both standardize the data format and keep the size of the data manageable, she extracted five frames from each movie, evenly spaced in time, which described the deformation of the object that was pressed.

Finally, she fed the data to a neural network, which automatically looked for correlations between changes in contact patterns and hardness measurements. The resulting system takes frames of video as inputs and produces hardness scores with very high accuracy. Yuan also conducted a series of informal experiments in which human subjects palpated fruits and vegetables and ranked them according to hardness. In every instance, the GelSight-equipped robot arrived at the same rankings.
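The frame-extraction step described above (five frames, evenly spaced in time, per contact movie) can be sketched as follows. This is an illustrative reconstruction of that data-preparation step under stated assumptions, not the paper's actual code; the frame contents here stand in for GelSight images.

```python
# Hedged sketch of the data-preparation step: each contact "movie" is
# reduced to a fixed number of frames, evenly spaced in time and always
# including the first and last frame, so every press yields an input of
# the same size for the downstream learner.

def sample_frames(movie, n_frames=5):
    """Return n_frames elements of `movie`, evenly spaced across it."""
    if len(movie) < n_frames:
        raise ValueError("movie shorter than requested sample count")
    step = (len(movie) - 1) / (n_frames - 1)
    return [movie[round(i * step)] for i in range(n_frames)]
```

A fixed-size input like this is what lets a single network handle presses of different durations.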

Yuan is joined on the paper by her two thesis advisors, Adelson and Mandayam Srinivasan, a senior research scientist in the Department of Mechanical Engineering; Chenzhuo Zhu, an undergraduate from Tsinghua University who visited Adelson’s group last summer; and Andrew Owens, who did his PhD in electrical engineering and computer science at MIT and is now a postdoc at the University of California at Berkeley.

Obstructed views

The paper from the Robot Locomotion Group was born of the group’s experience with the Defense Advanced Research Projects Agency’s Robotics Challenge (DRC), in which academic and industry teams competed to develop control systems that would guide a humanoid robot through a series of tasks related to a hypothetical emergency.

Typically, an autonomous robot will use some kind of computer vision system to guide its manipulation of objects in its environment. Such systems can provide very reliable information about an object’s location — until the robot picks the object up. Especially if the object is small, much of it will be occluded by the robot’s gripper, making location estimation much harder. Thus, at exactly the point at which the robot needs to know the object’s location precisely, its estimate becomes unreliable. This was the problem the MIT team faced during the DRC, when their robot had to pick up and turn on a power drill.

“You can see in our video for the DRC that we spend two or three minutes turning on the drill,” says Greg Izatt, a graduate student in electrical engineering and computer science and first author on the new paper. “It would be so much nicer if we had a live-updating, accurate estimate of where that drill was and where our hands were relative to it.”

That’s why the Robot Locomotion Group turned to GelSight. Izatt and his co-authors — Tedrake, the Toyota Professor of Electrical Engineering and Computer Science, Aeronautics and Astronautics, and Mechanical Engineering; Adelson; and Geronimo Mirano, another graduate student in Tedrake’s group — designed control algorithms that use a computer vision system to guide the robot’s gripper toward a tool and then turn location estimation over to a GelSight sensor once the robot has the tool in hand.

In general, the challenge with such an approach is reconciling the data produced by a vision system with data produced by a tactile sensor. But GelSight is itself camera-based, so its data output is much easier to integrate with visual data than the data from other tactile sensors.

In Izatt’s experiments, a robot with a GelSight-equipped gripper had to grasp a small screwdriver, remove it from a holster, and return it. Of course, the data from the GelSight sensor don’t describe the whole screwdriver, just a small patch of it. But Izatt found that, as long as the vision system’s estimate of the screwdriver’s initial position was accurate to within a few centimeters, his algorithms could deduce which part of the screwdriver the GelSight sensor was touching and thus determine the screwdriver’s position in the robot’s hand.

“I think that the GelSight technology, as well as other high-bandwidth tactile sensors, will make a big impact in robotics,” says Sergey Levine, an assistant professor of electrical engineering and computer science at the University of California at Berkeley. “For humans, our sense of touch is one of the key enabling factors for our amazing manual dexterity. Current robots lack this type of dexterity and are limited in their ability to react to surface features when manipulating objects. If you imagine fumbling for a light switch in the dark, extracting an object from your pocket, or any of the other numerous things that you can do without even thinking — these all rely on touch sensing.”

“Software is finally catching up with the capabilities of our sensors,” Levine adds. “Machine learning algorithms inspired by innovations in deep learning and computer vision can process the rich sensory data from sensors such as the GelSight to deduce object properties. In the future, we will see these kinds of learning methods incorporated into end-to-end trained manipulation skills, which will make our robots more dexterous and capable, and maybe help us understand something about our own sense of touch and motor control.”

VENTURER driverless car project publishes results of first trials

VENTURER is the first Connected and Autonomous Vehicle project to start in the UK. The results of VENTURER preliminary trials show that the handover process is a safety critical issue in the development of Autonomous Vehicles (AVs).

The first VENTURER trials set out to investigate ‘takeover’ (time taken to reengage with vehicle controls) and ‘handover’ (time taken to regain a baseline/normal level of driving behaviour and performance) when switching frequently between automated and manual driving modes within urban and extra-urban settings. This trial is believed to be the first to directly compare handover to human driver-control from autonomous mode in both simulator and autonomous road vehicle platforms.

The handover process is important from a legal and insurance perspective: the length of time it takes people to regain full control of the vehicle represents a meaningful risk to insurers, and understanding when control is transferred between the vehicle and the driver has liability implications.

David Williams from AXA outlined that, “The results of this trial have been very useful as we consider the issues that the handover process raises for insurers. Although some motor manufacturers have said they will skip SAE Level 3, some are progressing with vehicles that will require the driver to take back control of the vehicle. The insurance industry will need to assess the relative safety of the handover systems as they come to market but VENTURER’s trial 1 results show that with robust testing we can properly assess how humans and autonomous vehicles interact during this crucial phase of the technologies’ evolution.”

VENTURER designed, tested and analysed both simulator and road vehicle-based handover trials.

Fifty participants were tested in a simulator and/or in the autonomous vehicle on roads on the UWE Bristol campus. The tests ran at speeds of 20, 30, 40 and 50 mph in the simulator and 20 mph in the autonomous vehicle; speeds common in urban and extra-urban settings. Participants' baseline driving behaviour was also measured, along with the length of time it took them to return to that baseline following handover.

During the trial, drivers were aware that they might be alerted to take control of the vehicle at any moment, whether due to decisions made by the driver or to the capabilities of the vehicle in particular situations. VENTURER classifies this as a planned handover.

The 20 and 30 mph scenarios involved town/city urban driving, and the 40 and 50 mph scenarios involved outer-town/city extra-urban driving. Driving speed, lateral lane position and braking behaviour (amongst other measures) were recorded.

A key finding is that it took participants 2-3 seconds to 'take over' manual controls and resume active driving after short periods of autonomous driving in urban environments.
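To put that finding in perspective, here is a back-of-envelope sketch (not part of the VENTURER study; it simply combines the trial speeds and the reported 2-3 second takeover window, and assumes the vehicle holds its speed during takeover) of the distance a vehicle covers before the driver has re-engaged the controls:

```python
# Distance covered during a 2-3 s takeover window at each trial speed,
# assuming constant speed throughout the window.
MPH_TO_MS = 0.44704  # exact conversion: 1 mph = 0.44704 m/s

for mph in (20, 30, 40, 50):
    v = mph * MPH_TO_MS          # speed in metres per second
    lo, hi = 2 * v, 3 * v        # distance over 2 s and 3 s
    print(f"{mph} mph: {lo:.0f}-{hi:.0f} m covered during takeover")
```

At 50 mph this works out to roughly 45-67 metres travelled before the driver is even back on the controls, which helps explain why the researchers suggest slowing to a safe speed before handover is attempted.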

They also found that participants drove more slowly than the recommended speed limit for up to 55 seconds following a handover request, which suggests more cautious, but not necessarily safer, driving. This could matter for traffic management: if drivers on the road replicated this behaviour, it might disrupt the flow of traffic and offset some of the predicted benefits of AVs.


In addition, participants returned to their baseline manual driving behaviour within 10-20 seconds of handover, with most measures, including speed, stabilising after 20-30 seconds. This was not the case in the highest-speed simulator condition, where stabilisation did not occur on most measures within the 55-second measurement period.

The team says these results have implications for the designers of autonomous vehicles with handback functionality, for example, in terms of phased handover systems. The results also inform the emerging market for insurance for autonomous vehicles.

Chair of the project, Lee Woodcock (Atkins) said, “The outcome of this research for trial one is significant and must provide food for thought as the market develops for driverless cars and how we progress through the different levels of automation. Further research must also explore interaction not just between vehicles but also with network operations and city management systems.”

Dr Phillip Morgan (UWE Bristol) said, “Designers need to proceed with caution and consider human performance under multiple driving conditions and scenarios in order to plot accurate takeover and handover time safety curves. In the time it takes for drivers to reach their baseline behaviour, the vehicle may have travelled some distance, depending on the speed. These initial trials show that there are some risk elements in the handover process and bigger studies with more participants may be needed to ensure there is sufficient data to build safe handover support systems.”

Professor Graham Parkhurst (UWE Bristol) said, “The results of these tests suggest that autonomous vehicles on highways should slow to a safe speed before handover is attempted. Further research is required to clarify what that safe speed is, but it would be substantially slower than the 70 mph motorway limit, and somewhat lower than the highest speed (50 mph) considered in our simulator trials.”

The trial clearly demonstrated that there were no major differences between control of the simulator and the Wildcat platforms used within the trial, validating the future use of simulators for the development of autonomous vehicles and associated technologies.


Two stars, different fates

Levandowski (right) at MCE 2016. Source: Wikipedia Commons

Andy Rubin, who developed the Android operating system at Google and then led Google's expansion into robotics through multiple acquisitions, has launched a new consumer products company. Anthony Levandowski, who spent many years with Google and its autonomous driving project before launching Otto, which Uber acquired, was sued by Google and has just been fired by Uber.

People involved in robotics – from the multi-disciplined scientists turned entrepreneurs to the specialists and engineers involved in every aspect of making things robotic – form a relatively small community. Most people know (or know of) most of the others, and get-togethers like ICRA, the IEEE International Conference on Robotics and Automation, being held this week in Singapore, are an opportunity to meet up-and-coming talent presenting their papers and product ideas, and to mingle with older, more established players. Two of those players made headline news this week: Rubin, launching Essential, and Levandowski, getting fired.

Andy Rubin

Rubin came to Google in 2005 when it acquired Android, and left in 2014 to found Playground Global, an incubator for hardware startups. While at Google, Rubin became SVP of Mobile and Digital Content, responsible for the open-source Android smartphone operating system, and later started Google's robotics group through a series of acquisitions. Android can now be found in more than 2 billion phones, TVs, cars and watches.

2008 Google Developer Day in Japan – Press Conference: Andy Rubin

In 2007, Rubin was developing Google's own smartphone, also named Android, when Apple launched the iPhone, a much more capable and stylish device. Google's handset was scrapped, but its software was offered to HTC, whose device became the first Android-based phone. The software was similar enough to Apple's that Steve Jobs was furious and, as reported in Fred Vogelstein's 'Dogfight: How Apple and Google Went to War and Started a Revolution,' called Rubin a "big, arrogant f–k," adding that "everything [he's doing] is a f–king rip-off of what we're doing."

Jobs had trusted Google's cofounders, Larry Page and Sergey Brin, and Google's CEO Eric Schmidt, who sat on Apple's board. All three had been telling Jobs about Android, insisting it would be different from the iPhone. He believed them until he actually saw the phone and its software and how similar it was to the iPhone's, whereupon he insisted Google make substantial changes and removed Schmidt from Apple's board. Rubin was miffed and kept a sign on his office whiteboard that read "STEVE JOBS STOLE MY LUNCH MONEY."

Quietly, stealthily, Rubin went about creating "a new kind of company using 21st-century methods to build products for the way people want to live in the 21st century." That company is Essential, which has just launched and is taking orders for its new $699 phone and a still-stealthy home assistant to compete with Amazon's Echo and Google's Home devices.

Wired calls the new Essential Phone "the anti-iPhone." The first phones will ship in June.

Anthony Levandowski

In 2004, Levandowski and a team from UC Berkeley built an autonomous motorcycle and entered it in the DARPA Grand Challenge. In 2007 he joined Google to work with Sebastian Thrun on Google Street View. Outside of Google, he started a mobile mapping company that experimented with LiDAR technology, and another that built a self-driving, LiDAR-equipped Prius. Google acquired both companies, including their IP.

In 2016 Levandowski left Google to found Otto, a company making self-driving kits to retrofit semi-trailer trucks. Just as the kit was launched, Uber acquired Otto and Levandowski became the head of Uber’s driverless car operation in addition to continuing his work at Otto.

Quoting Wikipedia,

According to a February 2017 lawsuit filed by Waymo, the autonomous vehicle research subsidiary of Alphabet Inc, Levandowski allegedly “downloaded 9.7 GB of Waymo’s highly confidential files and trade secrets, including blueprints, design files and testing documentation” before resigning to found Otto.

In March 2017, United States District Judge William Haskell Alsup referred the case to federal prosecutors after Levandowski exercised his Fifth Amendment right against self-incrimination. In May 2017, Judge Alsup ordered Levandowski to refrain from working on Otto's LiDAR and required Uber to disclose its discussions on the technology. Levandowski was later fired by Uber for failing to cooperate in an internal investigation.
