Here are the slides I gave recently as a member of the panel Sci-Fi Dreams: How visions of the future are shaping the development of intelligent technology, at the Centre for the Future of Intelligence 2017 conference. I presented three short stories about robot stories.
Slide 2:
The FP7 TRUCE Project invited a number of scientists – mostly within the field of Artificial Life – to suggest ideas for short stories. Those ideas were then sent to a panel of writers, each of whom chose one to develop. I submitted an idea called The feeling of what it is like to be a robot and was delighted when Lucy Caldwell contacted me. Following a visit to the lab, Lucy drafted a beautiful story called The Familiar which – following some iteration – appeared in the collected volume Beta Life.
Slide 3:
More recently the EU Human Brain Project Foresight Lab brought three Sci-Fi writers – Allen Ashley, Jule Owen and Stephen Oram – to visit the lab. Inspired by what they saw, they wrote three wonderful short stories, which were read at the 2016 Bristol Literature Festival. The readings were followed by a panel discussion that included me and BRL colleagues Antonia Tzemanaki and Marta Palau Franco. The three stories are published in the volume Versions of the Future. Stephen Oram went on to publish a collection called Eating Robots.
Slide 4:
My first two stories were about people telling stories about robots. Now I turn to the possibility of robots themselves telling stories. Some years ago I speculated on the idea of robots telling each other stories (directly inspired by a conversation with Richard Gregory). That idea has now turned into a current project, with the aim of building an embodied computational model of storytelling. For a full description see this paper, currently in press.
Our robotics digest is a quick, hassle-free way to stay on top of robotics news, released on the first Monday of every month. Sign up to get it in your inbox.
Robots, drones and AI in action
Let’s kick off our June review by looking at some great new robotics research and development in action: Inspired by arthropods such as insects and spiders, Harvard Professor George Whitesides and Alex Nemiroski—a former postdoctoral fellow in Whitesides’ Harvard lab—have created a type of semi-soft robot capable of standing and walking. The team also created a robotic water strider capable of pushing itself along the liquid surface. The robots are described in a recently published paper in the journal Soft Robotics.
And in news from the garden shed, Franklin Robotics has launched a Kickstarter campaign for Tertill, their solar-powered, garden-weeding robot. Tertill lives in your garden, collecting sunlight to power its weed patrol, and cutting down short plants with a string trimmer/weed whacker with almost no intervention required. Available for about $300USD, the fully autonomous Tertill is the first weeding robot available to home gardeners! Check out the video below.
You may have heard that humankind lost another important battle with artificial intelligence last month when AlphaGo beat the world’s leading Go player Ke Jie by three games to zero. AlphaGo is an AI program developed by DeepMind, part of Google’s parent company Alphabet. Last year it beat another leading player, Lee Se-dol, by four games to one, but since then AlphaGo has substantially improved. Ke Jie described AlphaGo’s skill as “like a God of Go”. But AlphaGo will now retire from playing Go, leaving behind a legacy of games played against itself. These games have been described by one Go expert as like “games from far in the future”, which humans will study for years to improve their own play.
Elsewhere, Chinese education authorities have gone high-tech to catch cheaters as millions of high-school students take their “gaokao”, the annual university entrance exam seen as key to landing a lucrative white-collar job. So high are the stakes and so competitive is the exam that some students resort to cheating. But not if these facial recognition drones can help it.
Meanwhile, back in the pub, students from the University of Leeds have created a robot which they claim is capable of pulling the perfect pint. The team, from the School of Mechanical Engineering, worked with local engineering company Quality Bearings and Saltaire Brewery to come up with the concept. The robot was put through its paces with a taste test, a consistency test and a wastage test. Check out the clip below.
At ICRA 2017, researchers from the Japan Aerospace Exploration Agency (JAXA) introduced a small robotic explorer that uses a single solid-fuel rocket to launch itself into the air. What’s new is that their robot includes braking rockets that help it make pinpoint landings, a clever gyroscopic system to make sure it flies straight, and a way for the robot to get around after landing.
In other flying news, a team of MIT engineers has come up with a much less expensive UAV design that can hover for longer durations to provide wide-ranging communications support. The researchers designed, built, and tested a UAV resembling a thin glider with a 24-foot wingspan. The vehicle can carry 4.5 to 9 kg of communications equipment while flying at an altitude of 15,000 feet. Weighing in at just under 68 kg, the vehicle is powered by a 5-horsepower gasoline engine and can keep itself aloft for more than five days — longer than any gasoline-powered autonomous aircraft has remained in flight, the researchers say. Check out the Jungle Hawk Owl’s maiden flight below.
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are aiming to develop flying robots that can both fly and drive through a city-like setting with parking spots, no-fly zones and landing pads. In a new paper, the team presented a system of eight quadcopter drones that can do all of that and more.
And last but not least: Authors of the ICRA 2017 Best Automation Paper “UAV-Based Crop and Weed Classification for Smart Farming” wrote about their findings for Robohub. Check it out.
Policy and financing
Europe needs a “human in command approach,” says the European Economic and Social Committee (EESC). The EU must pursue a policy that ensures the development, deployment and use of artificial intelligence in Europe works to the benefit of society and social welfare, not to its detriment, the Committee said in an initiative opinion on the social impact of AI, in which 11 fields are identified for action.
Across the pond, some of the nation’s leading wireless giants and drone makers offered “effusive praise” of President Donald Trump in June as they lobbied his administration to eliminate the federal regulations that stand in the way of their businesses. As part of the White House’s five-day focus on technology, Trump gathered executives from those industries—including AT&T CEO Randall Stephenson, PrecisionHawk CEO Michael Chasen and a number of venture capitalists—for a morning of brainstorming sessions devoted to spurring new investments in emerging fields.
And in a long-awaited business transaction, The New York Times Dealbook announced that SoftBank was buying Boston Dynamics from Alphabet (Google). Also included in the deal is the Japanese startup Schaft. Acquisition details were not disclosed. Both Boston Dynamics and Schaft were acquired by Google when Andy Rubin was developing Google’s robot group through a series of acquisitions. Both companies have continued to develop innovative mobile robots. And both have been on Google’s “for sale” list.
The Drone Racing League (DRL) has announced the closing of a $20 million Series B round of financing, bringing the total amount raised to $32 million. The new round of financing was led by Sky, Liberty Media (owner of Formula 1) and Lux Capital, with participation from new investors Allianz and World Wrestling Entertainment (WWE). Investment from Allianz was expected after it was announced in February that Allianz had signed on as title sponsor of DRL’s elite racing circuit.
Health and medicine
Researchers from MIT’s CSAIL have developed a new system that uses a 3-D camera, a belt with separately controllable vibrational motors distributed around it, and an electronically reconfigurable Braille interface to give visually impaired users more information about their environments.
Elsewhere, a researcher at the University of the West of England (UWE Bristol) is developing a bio-inspired ‘smart’ knee joint for prosthetic lower limbs. Dr Appolinaire Etoundi, based at Bristol Robotics Laboratory, is leading the research and will analyse the functions, features and mechanisms of the human knee in order to translate this information into a new bio-inspired procedure for designing prosthetics.
And the medical innovations didn’t stop there in June. How about robot snakes slithering into the delicate field of heart surgery? Or a robotic doctor that can be controlled from hundreds of kilometres away by a human counterpart? Getting a check-up from a robot may sound like something from a sci-fi film, but scientists are closing in on this real-life scenario and have already tested a prototype.
An aging population means the age-dependency ratio—the proportion of the elderly compared with the number of workers—will almost double from 28.8% in 2015 to 51% in 2080, straining healthcare systems and national budgets. The creators of one humanoid robot (below) under development for the elderly say it can understand people’s actions and learn new behaviors in response, even though it is devoid of arms. Robots can be programmed to understand an elderly person’s preferences and habits and to detect changes in behavior: for example, if a yoga devotee misses a class, the robot will ask why, while if an elderly person falls it will automatically alert caregivers or emergency services.
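The routine-monitoring behaviour described above can be sketched as a toy rule-based check. All names below are hypothetical; the robot's actual software is not public:

```python
# Toy, rule-based sketch of routine monitoring for an eldercare robot.
# All function and event names are hypothetical, for illustration only.

def check_routine(expected_events, observed_events, on_deviation):
    """Flag any expected activity (e.g. a weekly yoga class) that was missed."""
    for event in sorted(expected_events):  # sorted for deterministic order
        if event not in observed_events:
            on_deviation(f"missed: {event}; ask why")

alerts = []
check_routine(
    expected_events={"yoga_class", "morning_walk"},
    observed_events={"morning_walk"},
    on_deviation=alerts.append,
)
print(alerts)  # one alert, for the missed yoga class
```

A real system would of course learn these routines from observation rather than hard-coding them.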
Self-driving news
June was a huge month of wheeling and dealing in the self-driving cars industry. NuTonomy, a small Boston startup that makes software for self-driving cars, has launched a research-and-development partnership with San Francisco’s Lyft Inc., the second-largest ride-hailing company in the United States. It’s the latest alliance between Lyft and a maker of autonomous vehicle technology, and could boost nuTonomy’s efforts to become a major force in self-driving vehicles. Lyft chief executive Logan Green said the partnership “could lead to thousands of Lyft cars on the nuTonomy platform.”
Meanwhile, competitor Uber, the global ride-sharing transportation company, named replacements to recover from the recent firing of Anthony Levandowski, who headed their Advanced Technologies Group, their OTTO trucking unit, and their self-driving team. Levandowski was fired May 30th; Eric Meyhofer will pick up the slack.
In other high-level firing news, Tesla Inc. has parted ways with another senior leader on its self-driving technology team, adding more turmoil to a program that is under pressure to meet the grand ambitions of Chief Executive Elon Musk. The Silicon Valley electric-car maker said Chris Lattner—head of development of Tesla’s Autopilot program—left his post after he and Musk failed to see eye to eye on some important issues during Lattner’s six months in post.
Meanwhile, Waymo is done driving around the cute, steering-wheel-free autonomous cars that were introduced by Google back in 2014. In a blog post, Waymo leaders write that the time has come to “retire our fleet of Fireflies”—their name for the tiny cars—and focus instead on integrating self-driving technology into other vehicles, like the Chrysler Pacifica minivans Waymo put on the road earlier this year.
In the UK, the Venturer driverless car project published the results of its first trials. Venturer is the first Connected and Autonomous Vehicle project to start in the UK. The results of Venturer’s preliminary trials show that the handover process is a safety-critical issue in the development of Autonomous Vehicles. The first Venturer trials set out to investigate ‘takeover’ (time taken to re-engage with vehicle controls) and ‘handover’ (time taken to regain a baseline/normal level of driving behavior and performance) when switching frequently between automated and manual driving modes within urban and extra-urban settings. This trial is believed to be the first to directly compare handover to human driver-control from autonomous mode in both simulator and autonomous road vehicle platforms.
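As a rough illustration of the two metrics, using made-up timestamps rather than Venturer's actual data or tooling:

```python
# Toy illustration of the two Venturer metrics, from invented timestamps.
# takeover = time from the handover request until controls are re-engaged
# handover = time from the request until driving performance returns to baseline

def takeover_time(request_t, controls_engaged_t):
    return controls_engaged_t - request_t

def handover_time(request_t, baseline_regained_t):
    return baseline_regained_t - request_t

# Hypothetical event log (seconds since the start of a trial run)
request, engaged, baseline = 120.0, 122.6, 175.0
print(takeover_time(request, engaged))   # seconds to re-engage the controls
print(handover_time(request, baseline))  # seconds to regain baseline driving
```

The trial's safety-critical finding is visible even in this sketch: handover (regaining normal driving) can take far longer than simply grabbing the wheel.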
Honda was the latest automaker to commit to an ambitious self-driving car goal in June. It wants cars with SAE Level 4 autonomy on the road by 2025, CEO Takahiro Hachigo announced at a media event in Japan. This news will likely stoke a fire underneath other Japanese self-driving car developers, so stay tuned for lots of new developments.
Auto supplier Robert Bosch GmbH will build a 1 billion-euro ($1.1 billion) semiconductor plant, the biggest single investment in its history, as the maker of brakes and engines prepares for a surge in demand for components used in self-driving vehicles.
Tutorials
Robohub showcased some great tutorials last month. Here’s one: The Robot Academy is a new learning resource from Professor Peter Corke and the Queensland University of Technology, the team behind the award-winning Introduction to Robotics and Robotic Vision courses. There are over 200 lessons available, all for free. The courses were designed for university undergraduate students but many lessons are suitable for anybody! So get stuck in.
Florian Enner offered this useful tutorial on programming: “Using MATLAB for hardware-in-the-loop prototyping #1 : Message passing systems”. Check it out here.
And Ricardo Téllez offered an article on “Teaching ROS quickly to students”—a novel method for teaching a class of students in a rapid time frame.
Enjoy!
Upcoming events for July–August 2017
CIROS: July 5, 2017 — July 8, 2017, Shanghai, China
ASCEND Conference and Expo: July 19, 2017 — July 21, 2017, Portland, OR
RoboCup: July 25, 2017 — July 31, 2017, Nagoya, Japan
Farm Progress Show: August 29, 2017 — August 31, 2017, Decatur, IL
World of Drones Congress (WoDC): August 31, 2017 — September 2, 2017, Brisbane, Australia
The two-day conference will focus on topics that cut across many of the issues and disciplines involved in the future of AI: narratives and trust.
Keynote speakers include Professor Stuart Russell (Berkeley), Baroness Onora O’Neill (Cambridge), Dr Claire Craig (Royal Society), Matt Hancock (MP) and Professor Francesca Rossi (University of Padova, Italy).
You can watch the live stream here, or follow the tweets below at #CFIConf.
SoftBank’s Pepper humanoid robot operation (a joint venture with Foxconn and Alibaba) has incurred a big $274 million loss, while Asia has more than doubled the amount of funding for tech startups thus far in 2017. No one ever said VC funding was for the faint of heart.
The Ups
According to PwC and CB Insights, venture capital investments in Asia in the first six months of 2017 totaled $28.8 billion. VC investments in North America for the same period totaled $18.4 billion.
CB Insights reports that 45% of all dollars invested in tech in 2017 went to Asian firms.
Largest deals in Asia so far this year included Didi Chuxing raising $5.5 billion, One97 Communications ($1.4 billion), GO-JEK ($1.2 billion), Bytedance ($1 billion) and Ele.me ($1 billion).
Largest deals in North America in the quarter included San Francisco-based Lyft – which raised $600 million, Outcome Health ($500 million), Group Nine Media ($485 million), Houzz ($400 million), and Guardant Health ($360 million).
The number of deals around the world, as shown in the chart above, remains heavily in the West. Almost every day the news reports another fund being set up to invest in one area of tech or another. For example, Toyota Motor Corp today announced a $100 million fund (Toyota AI Ventures) for AI and robotics startups and has already made some initial investments. The first three are for a maker of cameras that monitor drivers and roads, a creator of autonomous car-mapping algorithms, and a developer of robotic companions for the elderly.
The Downs
Nikkei Asian Review reports on SoftBank Robotics’ $274 million loss which they attribute to the Pepper humanoid robot joint venture with Alibaba and Foxconn. The subsidiary was established in 2014 and began consumer sales of Pepper in June 2015 and business sales that October.
“Although the company does not release earnings, it recorded sales of 2.2 billion yen and a net loss of 11.7 billion yen in fiscal 2015, according to Tokyo Shoko Research. That is markedly worse than the 2.3 billion yen net loss from fiscal 2014. 'Pepper is unprofitable because of its relatively low price for a humanoid robot, costing just 198,000 yen ($1,750), which cannot cover development costs.'”
A SoftBank PR statement said that they will increase corporate sales and improve earnings through related businesses such as apps and content and that sales are good.
In recent years, engineers have worked to shrink drone technology, building flying prototypes that are the size of a bumblebee and loaded with even tinier sensors and cameras. Thus far, they have managed to miniaturize almost every part of a drone, except for the brains of the entire operation — the computer chip.
Standard computer chips for quadcopters and other similarly sized drones process an enormous amount of streaming data from cameras and sensors, and interpret that data on the fly to autonomously direct a drone’s pitch, speed, and trajectory. To do so, these computers use between 10 and 30 watts of power, supplied by batteries that would weigh down a much smaller, bee-sized drone.
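A back-of-envelope calculation shows why 10 to 30 watts is prohibitive at bee scale. The energy density and flight time below are assumed round numbers, not figures from the research:

```python
# Back-of-envelope: battery mass needed to power the flight computer alone.
# Assumptions (not from the paper): Li-ion packs at ~250 Wh/kg usable
# specific energy, and a 10-minute flight.

ENERGY_DENSITY_WH_PER_KG = 250.0   # assumed Li-ion specific energy
FLIGHT_TIME_H = 10 / 60            # assumed 10-minute flight

def battery_mass_g(power_w):
    """Grams of battery needed to supply power_w for the whole flight."""
    energy_wh = power_w * FLIGHT_TIME_H
    return energy_wh / ENERGY_DENSITY_WH_PER_KG * 1000  # kg -> grams

for watts in (2, 10, 30):
    print(f"{watts:>2} W computer -> {battery_mass_g(watts):.1f} g of battery")
```

Even at the low end, the computer's battery alone would weigh several grams, an order of magnitude more than a bumblebee-sized airframe can lift.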
Now, engineers at MIT have taken a first step in designing a computer chip that uses a fraction of the power of larger drone computers and is tailored for a drone as small as a bottlecap. They will present a new methodology and design, which they call “Navion,” at the Robotics: Science and Systems conference, held this week at MIT.
The team, led by Sertac Karaman, the Class of 1948 Career Development Associate Professor of Aeronautics and Astronautics at MIT, and Vivienne Sze, an associate professor in MIT’s Department of Electrical Engineering and Computer Science, developed a low-power algorithm, in tandem with pared-down hardware, to create a specialized computer chip.
The key contribution of their work is a new approach for designing the chip hardware and the algorithms that run on the chip. “Traditionally, an algorithm is designed, and you throw it over to a hardware person to figure out how to map the algorithm to hardware,” Sze says. “But we found by designing the hardware and algorithms together, we can achieve more substantial power savings.”
“We are finding that this new approach to programming robots, which involves thinking about hardware and algorithms jointly, is key to scaling them down,” Karaman says.
The new chip processes streaming images at 20 frames per second and automatically carries out commands to adjust a drone’s orientation in space. The streamlined chip performs all these computations while using just below 2 watts of power — making it an order of magnitude more efficient than current drone-embedded chips.
Karaman says the team’s design is the first step toward engineering “the smallest intelligent drone that can fly on its own.” He ultimately envisions disaster-response and search-and-rescue missions in which insect-sized drones flit in and out of tight spaces to examine a collapsed structure or look for trapped individuals. Karaman also foresees novel uses in consumer electronics.
“Imagine buying a bottlecap-sized drone that can integrate with your phone, and you can take it out and fit it in your palm,” he says. “If you lift your hand up a little, it would sense that, and start to fly around and film you. Then you open your hand again and it would land on your palm, and you could upload that video to your phone and share it with others.”
Karaman and Sze’s co-authors are graduate students Zhengdong Zhang and Amr Suleiman, and research scientist Luca Carlone.
From the ground up
Current minidrone prototypes are small enough to fit on a person’s fingertip and are extremely light, requiring only 1 watt of power to lift off from the ground. Their accompanying cameras and sensors use up an additional half a watt to operate.
“The missing piece is the computers — we can’t fit them in terms of size and power,” Karaman says. “We need to miniaturize the computers and make them low power.”
The group quickly realized that conventional chip design techniques would likely not produce a chip that was small enough and provided the required processing power to intelligently fly a small autonomous drone.
“As transistors have gotten smaller, there have been improvements in efficiency and speed, but that’s slowing down, and now we have to come up with specialized hardware to get improvements in efficiency,” Sze says.
The researchers decided to build a specialized chip from the ground up, developing algorithms to process data, and hardware to carry out that data-processing, in tandem.
Tweaking a formula
Specifically, the researchers made slight changes to an existing algorithm commonly used to determine a drone’s “ego-motion,” or awareness of its position in space. They then implemented various versions of the algorithm on a field-programmable gate array (FPGA), a very simple programmable chip. To formalize this process, they developed a method called iterative splitting co-design that could strike the right balance of achieving accuracy while reducing the power consumption and the number of gates.
A typical FPGA consists of hundreds of thousands of disconnected gates, which researchers can connect in desired patterns to create specialized computing elements. Reducing the number of gates with co-design allowed the team to choose an FPGA chip with fewer gates, leading to substantial power savings.
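The flavour of that co-design loop can be sketched as a search over algorithm variants, each scored by accuracy and an invented gate-count power model. This is a toy illustration, not the team's actual iterative splitting co-design method:

```python
# Toy sketch of hardware/algorithm co-design: try algorithm variants,
# estimate the gates (and hence power) each needs, and keep the cheapest
# variant that still meets the accuracy target. All numbers and the
# linear power model are invented for illustration.

variants = [
    # (name, estimated accuracy, estimated gate count)
    ("full-precision", 0.99, 900_000),
    ("reduced-memory", 0.97, 400_000),
    ("low-precision",  0.95, 250_000),
    ("too-aggressive", 0.80, 120_000),
]

ACCURACY_TARGET = 0.95
WATTS_PER_GATE = 5e-6  # invented linear power model

def co_design(variants, target):
    """Among variants meeting the accuracy target, pick the fewest gates."""
    feasible = [v for v in variants if v[1] >= target]
    return min(feasible, key=lambda v: v[2])

name, acc, gates = co_design(variants, ACCURACY_TARGET)
print(f"chosen: {name}, ~{gates * WATTS_PER_GATE:.2f} W")
```

Each pass of the real loop re-implements the tweaked ego-motion algorithm on the FPGA and measures power directly, rather than relying on a model like this one.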
“If we don’t need a certain logic or memory process, we don’t use them, and that saves a lot of power,” Karaman explains.
Each time the researchers tweaked the ego-motion algorithm, they mapped the version onto the FPGA’s gates and connected the chip to a circuit board. They then fed the chip data from a standard drone dataset — an accumulation of streaming images and accelerometer measurements from previous drone-flying experiments that had been carried out by others and made available to the robotics community.
“These experiments are also done in a motion-capture room, so you know exactly where the drone is, and we use all this information after the fact,” Karaman says.
Memory savings
For each version of the algorithm that was implemented on the FPGA chip, the researchers observed the amount of power that the chip consumed as it processed the incoming data and estimated its resulting position in space.
The team’s most efficient design processed images at 20 frames per second and accurately estimated the drone’s orientation in space, while consuming less than 2 watts of power.
The power savings came partly from modifications to the amount of memory stored in the chip. Sze and her colleagues found that they were able to shrink the amount of data that the algorithm needed to process, while still achieving the same outcome. As a result, the chip itself was able to store less data and consume less power.
“Memory is really expensive in terms of power,” Sze says. “Since we do on-the-fly computing, as soon as we receive any data on the chip, we try to do as much processing as possible so we can throw it out right away, which enables us to keep a very small amount of memory on the chip without accessing off-chip memory, which is much more expensive.”
In this way, the team was able to reduce the chip’s memory storage to 2 megabytes without using off-chip memory, compared to a typical embedded computer chip for drones, which uses off-chip memory on the order of a few gigabytes.
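The on-the-fly approach can be illustrated with a toy memory comparison: keep only the current frame plus a small running state, instead of buffering every frame. All sizes below are illustrative, not Navion's actual figures:

```python
# Toy comparison: streaming (process-and-discard) vs. buffer-everything.
# Frame dimensions, frame rate, and state size are illustrative only.

FRAME_BYTES = 752 * 480   # one grayscale VGA-ish frame (~360 KB)
N_FRAMES = 20 * 60        # one minute of video at 20 frames per second

def buffered_memory_bytes(n_frames):
    """Memory needed if every frame is stored before processing."""
    return n_frames * FRAME_BYTES

def streaming_memory_bytes(state_bytes=2 * 1024 * 1024):
    """Memory needed if only the current frame and a running state are kept."""
    return FRAME_BYTES + state_bytes

print(f"buffered : {buffered_memory_bytes(N_FRAMES) / 1e6:.0f} MB")
print(f"streaming: {streaming_memory_bytes() / 1e6:.1f} MB")
```

The gap between hundreds of megabytes and a couple of megabytes is what lets the streaming design stay entirely on-chip and avoid power-hungry off-chip memory accesses.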
“Any which way you can reduce the power so you can reduce battery size or extend battery life, the better,” Sze says.
This summer, the team will mount the FPGA chip onto a drone to test its performance in flight. Ultimately, the team plans to implement the optimized algorithm on an application-specific integrated circuit, or ASIC, a more specialized hardware platform that allows engineers to design specific types of gates, directly onto the chip.
“We think we can get this down to just a few hundred milliwatts,” Karaman says. “With this platform, we can do all kinds of optimizations, which allows tremendous power savings.”
This research was supported, in part, by the Air Force Office of Scientific Research and the National Science Foundation.
Join us at the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) for a full day workshop that will bring together international stakeholders in robotics to examine best practices for accelerating robotics innovation through strategic policy frameworks.
IROS Workshop | 8:30-17:00 September 28, 2017 | Vancouver Convention Centre | Vancouver BC
This is a unique opportunity to learn from people who have played a significant role in designing and implementing major strategic robotics initiatives around the globe.
Objectives
In the past decade, a number of governing bodies and industry consortia have developed strategic roadmaps to guide investment in and development of robotic technology. With roadmaps from the US, South Korea, Japan and the EU well underway, the time is right to take stock of these strategic robotics initiatives to see what is working, what is not, and which best practices in roadmap development might be broadly applied to other regions.
The objective of this two-part workshop is to examine the process of how these policy frameworks came to be created in the first place, how they have been tailored to local capabilities and strengths, and what performance indicators are being used to measure their success — so that participants may draw from international collective experience as they design and evaluate strategic robotics initiatives for their own regions.
Program Part ONE — Morning Session: “Developing innovation policy for robotics, and establishing key performance indicators that are relevant to your region”
The morning session will feature international speakers who have played a significant role in launching and shaping major strategic robotics initiatives across the globe. The focus of this session will be on the history and process of designing the roadmap (rather than on merely presenting the roadmap itself) and key performance indicators of roadmap success. Via presentations and panel discussion, the outcome of this session will be an exploratory overview of best practices and key performance metrics, so that participants can apply knowledge gained from the workshop as they design strategic robotics policy frameworks for their own regional or national contexts.
Program Part TWO — Afternoon Session: “Towards a national robotics strategy for Canada”
The afternoon session will bring together leading Canadian robotics experts from academia, industry, federal/provincial policy, and the national research council to discuss and strategize the future of robotics in Canada, with an emphasis on addressing the social, economic, legal/ethical and regulatory issues, and the robotics strengths and capabilities specific to this country. The main goals of this session will be to 1) establish a clear picture of the internal Canadian robotics landscape and how it compares to other nations worldwide, 2) discuss lessons learned in the International session within the Canadian context, and 3) discuss and identify gaps and opportunities for a Canadian initiative. Ultimately the Canadian session will serve as a venue for collecting data and viewpoints to support the development of a Canadian Robotics Roadmap.
Who should attend
This workshop is open to all members of academia, government, and industry with an interest in funding, policy and research strategy.
Part One of the workshop (morning session) will be broadly applicable to anyone with an interest in robotics policy, partnerships and funding.
Part Two (afternoon session) will be of particular interest to Canadian conference-goers as well as those who are interested in Canadian research and industry partnerships.
Call for participation
We are actively seeking participants for this workshop. If you are
involved in developing or evaluating a major strategic robotics initiative in your region and would like to participate in our international discussion, OR
are a member of the Canadian robotics ecosystem or are a Canadian roboticist living abroad who would like to be involved in a national robotics strategy for Canada
then please send an email with your expression of interest to: info@canadianroboticsnetwork.com. We look forward to hearing from you!
Workshop Organizers
Elizabeth Croft, Director, Collaborative Advanced Robotics and Intelligent Systems (CARIS) Lab, UBC
Clément Gosselin, Professor, Laboratoire de Robotique, Université Laval
Paul Johnston, Research Policy Consultant, former President of Precarn Incorporated
Dana Kulić, Associate Professor, Adaptive Systems Laboratory, University of Waterloo
AJung Moon, Co-Founder & Director, Open Roboethics Institute
Angela Schoellig, Assistant Professor, Institute for Aerospace Studies (UTIAS), University of Toronto
Hallie Siegel, Strategic Foresight & Innovation @ OCAD U, former Managing Editor @Robohub
The U.S. Army is developing a drone that moves like a flying squirrel. Credit: David McNally/U.S. Army
News
A U.S. drone strike in Somalia targeted members of al-Shabab. It is the second drone strike in Somalia since President Trump relaxed rules for targeting members of the al-Qaeda-allied group. (New York Times)
The U.S. Federal Aviation Administration is offering refunds to drone hobbyists who paid the $5 fee to register with the agency. The move follows a federal court ruling in May that found that the FAA could not compel recreational drone users to register. The FAA has collected over $4 million in fees since it implemented the registration policy in December 2015. (Recode)
In a report published by the Mitchell Institute, Gen. David Deptula argues that the U.S. Department of Defense should create an office for unmanned aircraft to coordinate efforts across the different services. (Breaking Defense)
In Drone Warrior, Brett Velicovich and Christopher S. Stewart offer an insider’s account of running U.S. targeted killing operations. (Wired)
At Poynter, Melody Kramer considers how recent aerial images of New Jersey Gov. Chris Christie on an empty beach demonstrate how drones are becoming effective newsgathering tools.
At Drone360, Lauren Sigfusson looks at how the FAA’s Part 107 waiver and authorization process has changed in recent weeks.
At Arkansas Matters, Chris Pulliam says that crop dusters are concerned about the potential threats posed to their aircraft by drones.
At DefenseNews, Burak Ege Bekdil writes that Turkey is increasingly relying on drones for border security, counterterrorism, and operations against Kurdish groups.
At the National Interest, Samuel Bendett considers whether Russia will ever be able to catch up with the U.S. and Israel in terms of drone development.
In a letter to the editor of the Pocono Record, Pete Sauvigne argues that a local model aircraft club should not be stripped of its permission to fly in the Delaware Water Gap National Recreation Area.
At The Drive, Marco Margaritoff looks at a few of National Geographic’s best drone photos of the year to date.
Know Your Drone
A group of researchers in Florida is developing an underwater drone that seeks out and collects lionfish, an invasive species in the area. (Pensacola News Journal)
The Russian Navy is reportedly displeased with the performance of its Inspector Mk2 unmanned surface vehicle developed by French firm ECA Group. (Mil.Today)
Officials in Kaziranga National Park in India are using drones to monitor wildlife displaced by recent floods in the area. (Hindustan Times)
The U.S. Army awarded Assist Consultants an $18.1 million contract to build a facility for the Navy’s MQ-4C Triton at Al Dhafra Air Base in the United Arab Emirates. (FBO)
The Department of Justice awarded AARDVARK a $51,247 contract for backpackable robots. (FBO)
Germany’s Bayer CropScience awarded SlantRange, a U.S. company that makes sensors for drones, a contract to collect data on crop breeding and research programs. (Agriculture.com)
Meanwhile, Indonesia and Turkey agreed to cooperate on the development of military systems and technologies, including drones. (IHS Jane’s Defence Weekly)
Kratos Defense & Security Solutions’ stock price rose 11 percent in June, due in part to news of a sale of attritable unmanned aircraft. (Motley Fool)
The Israeli military awarded Duke Robotics, a Florida-based startup, a contract for the TIKAD, a quadrotor drone that can be armed with a machine gun or grenade launcher. (Defense One)
For updates, news, and commentary, follow us on Twitter. The Weekly Drone Roundup is a newsletter from the Center for the Study of the Drone. It covers news, commentary, analysis and technology from the drone world. You can subscribe to the Roundup here.
by Joe Dodgshun
Drone innovators are transforming the way we watch events, from football matches and boat races to music festivals.
Anyone who has watched coverage of a festival or sports event in the last few years will probably have witnessed commercial drone use — in the form of breathtaking aerial footage.
But a collaboration of universities, research institutes and broadcasters is looking to take this to the next level by using a small swarm of intelligent drones.
The EU-funded MULTIDRONE project seeks to create teams of three to five semi-automated drones that can react to and capture unfolding action at large-scale sports events.
Project coordinator Professor Ioannis Pitas, of the University of Bristol, UK, says the collaboration aims to have prototypes ready for testing by its media partners Deutsche Welle and Rai – Radiotelevisione Italiana within 18 months.
‘Deutsche Welle has two potential uses lined up – filming the Rund um Wannsee boat race in Berlin, Germany, and also filming football matches with drones instead of normal cameras – while Rai is interested in covering cycling races,’ said Prof. Pitas.
‘We think we have the potential to offer a much better film experience at a reduced cost compared to helicopters or single drones, producing a new genre in drone cinematography.’
But before they can chase the leader of the Tour de France, MULTIDRONE faces the hefty challenge of creating AI that allows its drones to safely carry out a mission as a team.
Prof. Pitas says safety is the utmost priority, so the drones will include advanced crowd avoidance mechanisms and the ability to make emergency landings.
And it’s not just safety in the case of bad weather, a flat battery or a rogue football.
‘Security of communications is important as a drone could otherwise be hijacked, not just undermining privacy but also raising the possibility that it could be used as a weapon,’ said Prof. Pitas.
The early project phase will have a strong focus on ethics to prevent any issues around privacy.
‘People are sensitive about drones and about being filmed and we’re approaching this in three ways — trying to avoid shooting over private spaces, getting consent from the athletes being followed, and creating mechanisms that decide which persons to follow and blur other faces.’
If they can pull it off, he predicts a huge boost for the European entertainment industry and believes it could lead to much larger drone swarms capable of covering city-wide events.
Drones-on-demand
According to Gartner research, sales of commercial-use drones are set to jump from 110 000 units in 2016 to 174 000 this year. Although 2 million toy drones were snapped up last year for USD 1.7 billion, the commercial market dwarfed this at USD 2.8 billion.
Aside from pure footage, drones have also proven their worth in research, disaster response, construction and even in monitoring industrial assets.
One company trying to open up the market to those needing a sky-high helping hand is Integra Aerial Services, a young drones-as-a-service company.
INAS, an offshoot of Danish aeronautics firm Integra Holding Group, was launched in 2014 thanks to an EU-backed feasibility study.
Drawing on Integra’s more than 25 years of experience in aviation, INAS used its knowledge of the sector’s legislation to shape a business model targeting heavier, more versatile drones weighing up to 25 kilogrammes. It has already been granted a commercial drone operating license by the Danish Civil Aviation Authority.
These bigger drones have far more endurance than typical toy drones, which can weigh anywhere from 250 grams to several kilos. INAS CEO Gilles Fartek says their bigger size means they can carry multiple sensors, thus collecting all the needed data in one fell swoop, instead of across multiple flights.
For example, one of their drones flies a LIDAR (Light Detection and Ranging) sensor over Greenland to measure ice thickness as an indicator of climate change, but could also carry a 100-megapixel, high-definition camera.
While INAS spends most of the Arctic summer running experiments from the remote host Station Nord in Greenland, Fartek says they’re free to use the drones for different projects in other seasons, mostly in areas of environmental research, mapping and agricultural monitoring.
‘You can’t match the quality of data for the price, but drone-use regulations in Europe are still quite complicated and make between-country operations almost impossible,’ said Fartek.
‘The paradox is that you have an increasing demand for such civil applications across Europe and even in institutional areas like civil protection and maritime safety where they cannot use military drones.’
A single European sky
These issues, and more, should soon be addressed by SESAR, the project which coordinates all EU research and development activities in air traffic management. SESAR plans to deploy a harmonised approach to European airspace management by 2030 in order to meet a predicted leap in air traffic.
Recently SESAR unveiled its blueprint outlining how it plans to make drone use in low-level airspace safe, secure and environmentally friendly. They hope this plan will be ready by 2019, paving the way for an EU drone services market by safely integrating highly automated or autonomous drones into low-level airspace of up to 150 metres.
Modelled after manned aviation traffic management, the plan will include registration of drones and operators, provide information for autonomous drone flights and introduce geo-fencing to limit areas where drones can fly.
The Issue
Emerging drone sectors include delivery services, industrial data collection, infrastructure inspection, precision agriculture, and transportation and logistics.
The market for drone services is expected to grow substantially in the coming years with an estimated worth of EUR 10 billion by 2035.
To support high-potential small- and medium-sized enterprises (SMEs), the European Commission has allocated EUR 3 billion over the period 2014-2020. A further EUR 17 billion was set aside under the Industrial Leadership pillar of the EU’s current research funding programme Horizon 2020.
Two reputable research resources are reporting that the robotics industry is growing more rapidly than expected. BCG (Boston Consulting Group) is conservatively projecting that the market will reach $87 billion by 2025; Tractica, incorporating the robotic and AI elements of the emerging self-driving industry, is forecasting the market will reach $237 billion by 2022.
Both research firms acknowledge that yesterday’s robots — which were blind, big, dangerous and difficult to program and maintain — are being replaced and supplemented with newer, more capable ones. Today’s new and future robots will have voice and language recognition; access to super-fast communications, data and libraries of algorithms; learning capability; mobility; portability; and dexterity. These new precision robots can sort and fill prescriptions, pick and pack warehouse orders, sort, inspect, process and handle fruits and vegetables, and perform a myriad of other industrial and non-industrial tasks, most faster than humans, all while working safely alongside them.
BCG suggests that business executives be aware of ways robots are changing the global business landscape and think and act now. They see robotics-fueled changes coming in retail, logistics, transportation, healthcare, food processing, mining and agriculture.
BCG cites the following drivers:
Private investment in the robotic space has continued to amaze with exponential year-over-year funding curves and sensational billion dollar acquisitions.
Prices continue to fall on robots, sensors, CPUs and communications while capabilities continue to increase.
Robot programming is being transformed by easier interfaces, GUIs and ROS.
The prospect of a self-driving vehicle industry disrupting transportation is propelling a talent grab and strategic acquisitions by competing international players with deep pockets.
40% of robotic startups have been in the consumer sector and will soon augment humans in high-touch fields such as health and elder care.
BCG also cites the following example of paying close attention to gain advantage:
“Amazon gained a first-mover advantage in 2012 when it bought Kiva Systems, which makes robots for warehouses. Once a Kiva customer, Amazon acquired the robot maker to improve the productivity and margins of its network of warehouses and fulfillment centers. The move helped Amazon maintain its low costs and expand its rapid delivery capabilities. It took five years for a Kiva alternative to hit the market. By then, Amazon had a jump on its rivals and had developed an experienced robotics team, giving the company a sustainable edge.”
The key story is that industrial robotics — the traditional pillar of the robotics market, dominated by Japanese and European manufacturers — has given way to non-industrial robot categories like personal assistant robots, UAVs, and autonomous vehicles. The epicenter is shifting toward Silicon Valley, now a hotbed for artificial intelligence (AI), a set of technologies that are, in turn, driving many of the most significant advancements in robotics. Consequently, Tractica forecasts that the global robotics market will grow rapidly between 2016 and 2022, with revenue from unit sales of industrial and non-industrial robots rising from $31 billion in 2016 to $237.3 billion by 2022. The market intelligence firm anticipates that most of this growth will be driven by non-industrial robots.
Tractica is headquartered in Boulder and analyzes global market trends and applications for robotics and related automation technologies within consumer, enterprise, and industrial marketplaces and related industries.
General Research Reports
Global autonomous mobile robots market, June 2017, 95 pages, TechNavio, $2,500
TechNavio forecasts that the global autonomous mobile robots market will grow at a CAGR of more than 14% through 2021.
Global underwater exploration robots, June 2017, 70 pages, TechNavio, $3,500
TechNavio forecasts that the global underwater exploration robots market will grow at a CAGR of 13.92% during the period 2017-2021.
Household vacuum cleaners market, March 2017, 134 pages, Global Market Insights, $4,500
Global Market Insights forecasts that household vacuum cleaners market size will surpass $17.5 billion by 2024 and global shipments are estimated to exceed 130 million units by 2024, albeit at a low 3.0% CAGR. Robotic vacuums show a slightly higher growth CAGR.
Global unmanned surface vehicle market, June 2017, Value Market Research, $3,950
Value Market Research analyzed drivers (security and mapping) versus restraints such as AUVs and ROVs and made their forecasts for the period 2017-2023.
Top technologies in advanced manufacturing and automation, April 2017, Frost & Sullivan, $4,950
This Frost & Sullivan report focuses on exoskeletons, metal and nano 3D printing, co-bots and agile robots – all of which are in the top 10 technologies covered.
Mobile robotics market, December 2016, 110 pages, Zion Market Research, $4,199
Zion Market Research forecasts that the global mobile robotics market will reach $18.8 billion by the end of 2021, growing at a CAGR of slightly above 13.0% between 2017 and 2021.
Unmanned surface vehicle (USV) market, May 2017, MarketsandMarkets, $5,650
MarketsandMarkets forecasts the unmanned surface vehicle (USV) market to grow from $470.1 Million in 2017 to $938.5 Million by 2022, at a CAGR of 14.83%.
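As a sanity check on figures like these, a CAGR is simply the constant annual growth rate implied by the start and end values: CAGR = (end/start)^(1/years) − 1. A quick illustrative sketch, applied to the USV forecast above:

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by growing from start to end."""
    return (end / start) ** (1 / years) - 1

# The USV market forecast above: $470.1M (2017) to $938.5M (2022), 5 years
print(f"{cagr(470.1, 938.5, 5):.2%}")  # → 14.83%
```

The computed rate matches the 14.83% CAGR quoted in the report.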
Agricultural Research Reports
Global agricultural robots market, May 2017, 70 pages, TechNavio, $2,500
Forecasts the global agricultural robots market will grow steadily at a CAGR of close to 18% through 2021.
Agriculture robots market, June 2017, TMR Research, $3,716
Robots are poised to replace agricultural hands. They can pluck fruits, sow and reap crops, and milk cows, carrying out these tasks much faster and with a greater degree of accuracy. This, coupled with mandates for higher minimum wages in many countries, is good news for the global agriculture robots market.
Agricultural Robots, December 2016, 225 pages, Tractica, $4,200
Forecasts that shipments of agricultural robots will increase from 32,000 units in 2016 to 594,000 units annually in 2024 and that the market is expected to reach $74.1 billion in annual revenue by 2024. Report, done in conjunction with The Robot Report, profiles over 165 companies involved in developing robotics for the industry.
The Second Edition of the award-winning Springer Handbook of Robotics edited by Bruno Siciliano and Oussama Khatib has recently been published. The contents of the first edition have been restructured to achieve four main objectives: the enlargement of foundational topics for robotics, the enlightenment of design of various types of robotic systems, the extension of the treatment on robots moving in the environment, and the enrichment of advanced robotics applications. Most previous chapters have been revised, fifteen new chapters have been introduced on emerging topics, and a new generation of authors has joined the handbook’s team. As with the first edition, a truly interdisciplinary approach has been pursued in line with the expansion of robotics across the boundaries with related disciplines. Again, the authors have been asked to step outside of their comfort zone, as the Editorial Board has teamed up authors who had never worked together before.
No doubt one of the most innovative elements is the inclusion of multimedia content to leverage the valuable written content inside the book. Under the editorial leadership of Torsten Kröger, a web portal has been created to host the Multimedia Extension of the book, which serves as a quick one-stop shop for more than 700 videos associated with the specific chapters. In addition, video tutorials have been created for each of the seven parts of the book, which benefit everyone from PhD students to seasoned robotics experts who have been in the field for years. A special video related to the contents of the first chapter shows the journey of robotics with the latest and coolest developments of the last 15 years. As publishing explores new interactive technologies, an app has been made available on Google Play and the iOS App Store to add a multimedia layer to the reader’s experience. With the app, readers can point their smartphone or tablet camera at a page containing one or more special icons and watch the associated videos in augmented reality as they read the book.
The Multimedia Portal offers free access to more than 700 accompanying videos. In addition, a Multimedia App is now downloadable from the App Store and Google Play for smartphones and tablets, allowing you to easily access multimedia content while reading the book.
In 2016, the European Union co-funded 17 new robotics projects from the Horizon 2020 Framework Programme for research and innovation. 16 of these resulted from the robotics work programme, and 1 project resulted from the Societal Challenges part of Horizon 2020. The robotics work programme implements the robotics strategy developed by SPARC, the Public-Private Partnership for Robotics in Europe (see the Strategic Research Agenda).
Every week, euRobotics will publish a video interview with a project, so that you can find out more about their activities. This week features CYBERLEGs++: The CYBERnetic LowEr-Limb CoGnitive Ortho-prosthesis Plus Plus.
Objectives
The goal of CYBERLEGs++ is to validate the technical and economic viability of the powered robotic ortho-prosthesis developed within the FP7-ICT-CYBERLEGs project. The aim is to enhance/restore the mobility of transfemoral amputees and to enable them to perform locomotion tasks such as ground-level walking, walking up and down slopes, climbing/descending stairs, standing up, sitting down and turning in scenarios of real life. Restored mobility will allow amputees to perform physical activity thus counteracting physical decline and improving the overall health status and quality of life.
Expected Impact
By demonstrating a modular robotics technology for healthcare in an operational environment (TRL 7) – from both the technical and economic viability viewpoints – with the ultimate goal of fostering its market exploitation, CYBERLEGs Plus Plus will have an impact on:
Society: CLs++ technology will contribute to increase the mobility of dysvascular amputees, and, more generally, of disabled persons with mild lower-limb impairments;
Science and technology: CLs++ will further advance the hardware and software modules of the ortho-prosthesis developed within the FP7 CYBERLEGs project and validate its efficacy through a multi-centre clinical study;
Market: CLs++ will foster the market exploitation of high-tech robotic systems and thus will promote the growth of both a robotics SME and a large healthcare company.
Partners
SCUOLA SUPERIORE SANT’ANNA (SSSA)
UNIVERSITÉ CATHOLIQUE DE LOUVAIN (UCL)
VRIJE UNIVERSITEIT BRUSSEL (VUB)
UNIVERZA V LJUBLJANI (UL)
FONDAZIONE DON CARLO GNOCCHI (FDG)
ÖSSUR (OSS)
IUVO S.R.L. (IUVO)
Coordinator
Prof. Nicola Vitiello, The BioRobotics Institute
Scuola Superiore Sant’Anna, Pisa, Italy
nicola.vitiello@santannapisa.it
The need for fast, accurate 3D mapping solutions has quickly become a reality for many industries wanting to adopt new technologies in AI and automation. New applications requiring these 3D mapping platforms include surveillance, mining, automated measurement & inspection, construction management & decommissioning, and photo-realistic rendering. Here at Clearpath Robotics, we decided to team up with Mandala Robotics to show how easily you can implement 3D mapping on a Clearpath robot.
3D Mapping Overview
3D mapping on a mobile robot requires Simultaneous Localization and Mapping (SLAM), for which there are many different solutions available. Localization can be achieved by fusing many different types of pose estimates. Pose estimation can be done using combinations of GPS measurements, wheel encoders, inertial measurement units, 2D or 3D scan registration, optical flow, visual feature tracking and other techniques. Mapping can be done simultaneously using the lidars and cameras that are used for scan registration and visual position tracking, respectively. This allows a mobile robot to track its position while creating a map of the environment. Choosing which SLAM solution to use is highly dependent on the application and the environment to be mapped. Although many 3D SLAM software packages exist and cannot all be discussed here, few 3D mapping hardware platforms offer full end-to-end 3D reconstruction on a mobile platform.
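To illustrate the fusion idea in its simplest form: two independent estimates of the same quantity can be combined by inverse-variance weighting, which is the scalar core of a Kalman filter update. A minimal sketch, not tied to any particular SLAM package:

```python
def fuse(est_a, var_a, est_b, var_b):
    """Fuse two independent estimates of the same quantity by
    inverse-variance weighting; returns (fused estimate, fused variance)."""
    # est_a is weighted by b's variance (and vice versa), so the
    # less-noisy source dominates the fused result.
    w = var_b / (var_a + var_b)
    fused = w * est_a + (1 - w) * est_b
    # The fused variance is always smaller than either input variance.
    fused_var = var_a * var_b / (var_a + var_b)
    return fused, fused_var

# e.g. wheel odometry says x = 0.0 m, scan registration says x = 2.0 m,
# both with unit variance: the fused estimate splits the difference.
print(fuse(0.0, 1.0, 2.0, 1.0))  # → (1.0, 0.5)
```

Real localization stacks apply the same principle to full 6-DOF poses with covariance matrices (e.g. via an extended Kalman filter), but the intuition is identical.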
Existing 3D Mapping Platforms
We will briefly highlight some of the more popular commercial 3D mapping platforms, which use one or more lidars and, in some cases, optical cameras for point cloud data collection. It is important to note that there are two ways to collect a 3D point cloud using lidars:
1. Use a 3D lidar, which consists of one device with multiple horizontally stacked laser beams
2. Tilt or rotate a 2D lidar to get 3D coverage
Tilting of a 2D lidar typically refers to back-and-forth rotating of the lidar about its horizontal plane, while rotating usually refers to continuous 360 degree rotation of a vertically or horizontally mounted lidar.
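To make the second approach concrete, here is a minimal sketch of projecting a single 2D scan into 3D sensor-frame points. It assumes an ideal tilt about the lidar's horizontal axis with no mounting offset; the function name and conventions are illustrative, not from any particular driver:

```python
import numpy as np

def scan_to_3d(ranges, bearings, tilt):
    """Project one 2D lidar scan, taken at tilt angle `tilt` (radians),
    into 3D sensor-frame coordinates.

    In the untilted scan plane each return is (r*cos(b), r*sin(b), 0);
    tilting rotates that plane about the sensor's horizontal (y) axis."""
    r, b = np.asarray(ranges), np.asarray(bearings)
    x, y = r * np.cos(b), r * np.sin(b)   # point in the scan plane
    ct, st = np.cos(tilt), np.sin(tilt)
    return np.column_stack((ct * x, y, st * x))  # rotate the plane about y
```

Aggregating these point sets over a sweep of tilt angles — and transforming each by the robot's pose at scan time — yields the full 3D cloud.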
Example 3D Mapping Platforms: 1. MultiSense SL (Left) by Carnegie Robotics, 2. 3DLS-K (Middle) by Fraunhofer IAIS Institute, 3. Cartographer Backpack (Right) by Google.
1. MultiSense SL
The MultiSense SL was developed by Carnegie Robotics and provides a compact and lightweight 3D data collection unit for researchers. The unit has a tilting Hokuyo 2D lidar, a stereo camera, LED lights, and is pre-calibrated for the user. This allows for the generation of coloured point clouds. This platform comes with a full software development kit (SDK), open source ROS software, and is the sensor of choice for the DARPA Robotics Challenge for humanoid robots.
2. 3DLS-K
The 3DLS-K is a dual-tilting unit made by Fraunhofer IAIS Institute with the option of using SICK LMS-200 or LMS-291 lidars. Fraunhofer IAIS also offers other configurations with continuously rotating 2D SICK or Hokuyo lidars. These systems allow for the collection of non-coloured point clouds. With the purchase of these units, a full application program interface (API) is available for configuring the system and collecting data.
3. Cartographer Backpack
The Cartographer Backpack is a mapping unit with two static Hokuyo lidars (one horizontal and one vertical) and an on-board computer. Google released cartographer software as an open source library for performing 3D mapping with multiple possible sensor configurations. The Cartographer Backpack is an example of a possible configuration to map with this software. Cartographer allows for integration of multiple 2D lidars, 3D lidars, IMU and cameras, and is also fully supported in ROS. Datasets are also publicly available for those who want to see mapping results in ROS.
Mandala Mapping – System Overview
Thanks to the team at Mandala Robotics, we got our hands on one of their 3D mapping units to try some mapping on our own. This unit consists of a mount for a rotating vertical lidar, a fixed horizontal lidar, as well as an onboard computer with an Nvidia GeForce GTX 1050 Ti GPU. The horizontal lidar allows for the implementation of 2D scan registration as well as 2D mapping and obstacle avoidance. The vertical rotating lidar is used for acquiring the 3D point cloud data. In our implementation, real-time SLAM was performed solely using 3D scan registration (more on this later) specifically programmed for full utilization of the onboard GPU. The software used to implement this mapping can be found on the mandala-mapping github repository.
Scan registration is the process of combining (or stitching) together two subsequent point clouds (either in 2D or 3D) to estimate the change in pose between the scans. This produces motion estimates to be used in SLAM and also allows a new point cloud to be added to an existing one in order to build a map. This process is achieved by running iterative closest point (ICP) between the two subsequent scans. ICP performs a closest neighbour search to match all points from the reference scan to a point on the new scan. Subsequently, optimization is performed to find rotation and translation matrices that minimise the distance between the closest neighbours. By iterating this process, the result converges to the true rotation and translation that the robot underwent between the two scans. This is the process that was used for 3D mapping in the following demo.
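The ICP loop just described can be sketched in a few lines of NumPy. This is a brute-force, point-to-point illustration (the real system runs a GPU-optimized variant), using the standard SVD solution for the rigid transform:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst,
    given known point-to-point correspondences (Kabsch/SVD method)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP: match each source point to its nearest
    neighbour in dst, solve for the rigid transform, apply it, repeat."""
    cur = src.copy()
    R_total = np.eye(src.shape[1])
    t_total = np.zeros(src.shape[1])
    for _ in range(iters):
        # Brute-force nearest-neighbour matching (use a k-d tree in practice)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        matches = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total                # src @ R_total.T + t_total ≈ dst
```

Production implementations replace the brute-force matching with a k-d tree or GPU search, reject outlier matches, and often use a point-to-plane error metric for faster convergence on structured environments.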
Mandala Robotics has also released additional examples of GPU computing tasks useful for robotics and SLAM. These examples can be found here.
Mandala Mapping Results
The following video shows some of our results from mapping areas within the Clearpath office, lab and parking lot. The datasets collected for this video can be downloaded here.
The Mandala Mapping software was very easy to get up and running for someone with basic knowledge in ROS. There is one launch file which runs the Husky base software as well as the 3D mapping. Initiating each scan can be done by sending a simple scan request message to the mapping node, or by pressing one button on the joystick used to drive the Husky. Furthermore, with a little more ROS knowledge, it is easy to incorporate autonomy into the 3D mapping. Our forked repository shows how a short C++ script can be written to enable constant scan intervals while navigating in a straight line. Alternatively, one could easily incorporate 2D SLAM such as gmapping together with the move_base package in order to give specific scanning goals within a map.
Why use Mandala Mapping on your robot?
If you are looking for a quick and easy way to collect 3D point clouds, with the versatility to use multiple lidar types, then this system is a great choice. The hardware work involved with setting up the unit is minimal and well documented, and it is preconfigured to work with your Clearpath Husky. Therefore, you can be up and running with ROS in a few days! The mapping is done in real time, with only a little lag time as your point cloud size grows, and it allows you to visualize your map as you drive.
The downside to this system, compared to the MultiSense SL for example, is that you cannot yet get a coloured point cloud since no cameras have been integrated into this system. However, Mandala Robotics is currently in the beta testing stage for a similar system with an additional 360 degree camera. This system uses the Ladybug5 and will allow RGB colour to be mapped to each of the point cloud elements. Keep an eye out for future Clearpath blog posts in case we get our hands on one of these systems! All things considered, the Mandala Mapping kit offers a great alternative to the aforementioned units and fills many of the gaps in their functionality.
In the race to develop self-driving technology, Chinese Internet giant Baidu unveiled its 50+ partners in an open source development program, revised its timeline for introducing autonomous driving capabilities on open city roads, described the Project Apollo consortium and its goals, and declared Apollo to be the ‘Android of the autonomous driving industry’.
At a developer's conference last week in Beijing, Baidu described its plans and timetable for its self-driving car technology. It will start test-driving in restricted environments immediately, before gradually introducing fully autonomous driving capabilities on highways and open city roads by 2020. Baidu's goal is to get those vehicles on the roads in China, the world's biggest auto market, with the hope that the same technology, embedded in exported Chinese vehicles, can then conquer the United States. To do so, Baidu has compiled a list of cooperative partners, a consortium of 50+ public and private entities, and named it Apollo, after NASA's massive Apollo moon-landing program.
Project Apollo
The program is making its autonomous car software open source in the same way that Google released its Android operating system for smartphones. By encouraging companies to build upon the system and share their results, it hopes to overtake rivals such as Google/Waymo, Tencent, Alibaba and others researching self-driving technology.
The Apollo platform consists of a core software stack, a number of cloud services, and self-driving vehicle hardware such as GPS, cameras, lidar, and radar.
The software currently available to outside developers is relatively simple: it can record the behavior of a car being driven by a person and then play that back in autonomous mode. This November, the company plans to release perception capabilities that will allow Apollo cars to identify objects in their vicinity. This will be followed by planning and localization capabilities, and a driver interface.
The cloud services being developed by Baidu include mapping services, a simulation platform, a security framework, and Baidu’s DuerOS voice-interface technology.
Members of the project include Chinese automakers Chery, Dongfeng Motor, Foton, Nio, Yiqi and FAW Group. Tier 1 members include Continental, Bosch, Intel, Nvidia, Microsoft and Velodyne. Other partners include Chinese universities, governmental agencies, Autonomous Stuff, TomTom, Grab and Ford. The full list of members can be seen here.
Quoting from Bloomberg News regarding the business aspect of Project Apollo:
China has set a goal for 10 to 20 percent of vehicles to be highly autonomous by 2025, and for 10 percent of cars to be fully self-driving in 2030. Didi Chuxing, the ride-sharing app that beat Uber in China, is working on its own product, as are several local automakers. It’s too early to tell which will ultimately succeed though Baidu’s partnership approach is sound, said Marie Sun, an analyst with Morningstar Investment Service.
“This type of technology needs cooperation between software and hardware from auto-manufacturers so it’s not just Baidu that can lead this,” she said. If Baidu spins off the car unit, “in the longer term, Baidu should maintain a major shareholder position so they can lead the growth of the business.”
Baidu and Apollo have a significant advantage over Google's Waymo: Baidu has a presence in the United States, whereas Alphabet has none in China because Google closed down its search site in 2010 rather than give in to China's internet censorship.
Strategic Issue
According to the Financial Times, “autonomous vehicles pose an existential threat [to global car manufacturers]. Instead of owning cars, consumers in the driverless age will simply summon a robotic transportation service to their door. One venture capitalist says auto executives have come to him saying they know they are “screwed”, but just want to know when it will happen.”
This desperation has prompted a string of big acquisitions and joint ventures amongst competing providers including those in China. Citing just a few:
Last year GM paid $1bn for Cruise, a self-driving car start-up.
Uber paid $680m for Otto, an autonomous trucking company that was less than a year old.
In March, Intel spent $15bn to buy Israel’s Mobileye, which makes self-driving sensors and software.
Baidu acquired Raven Tech, an Amazon Echo competitor; 8i, an augmented reality hologram startup; Kitt, a conversational language engine; and XPerception, a vision systems developer.
Tencent invested in mapping provider Here and acquired 5% of Tesla.
Alibaba announced that it is partnering with Chinese Big 4 carmaker SAIC in their self-driving effort.
China Network
Baidu’s research team in Silicon Valley is pivotal to their goals. Baidu was one of the first of the Chinese companies to set up in Silicon Valley, initially to tap into SV's talent pool. Today it is the center of a “China network” of almost three dozen firms, through investments, acquisitions and partnerships.
Baidu is rapidly moving forward from the SV center:
It formed a self-driving car sub-unit in April which now employs more than 100 researchers and engineers.
It partnered with chipmaker Nvidia.
It acquired vision systems startup XPerception.
It has begun testing its autonomous vehicles in China and California.
Regarding XPerception, Gartner research analyst Michael Ramsey said in a CNBC interview:
“XPerception has expertise in processing and identifying images, an important part of the sensing for autonomous vehicles. The purchase may help push Baidu closer to the leaders, but it is just one piece.”
XPerception is just one of many Baidu puzzle pieces intended to bring talent and intellectual property to the Apollo project. It acquired Raven Tech and Kitt AI to gain conversational transaction processing. It acquired 8i, an augmented reality hologram startup, to add AR — which many expect to be crucial in future cars — to the project. And it suggested that the acquisition spree will continue as needed.
Bottom Line
China has set a goal for 10 to 20 percent of vehicles to be highly autonomous by 2025, and for 10 percent of cars to be fully self-driving in 2030. Baidu wants to provide the technology to get those vehicles on the roads in China, with the hope that the same technology, embedded in exported Chinese vehicles, can then conquer the United States. It seems well poised to do so.
In this episode, MeiXing Dong conducts interviews at the 2017 Midwest Speech and Language Days workshop in Chicago. She talks with Michael White of Ohio State University about question interpretation in a dialogue system; Dmitriy Dligach of Loyola University Chicago about extracting patient timelines from doctor’s notes; and Denis Newman-Griffiths of Ohio State University about connecting words and phrases to relevant medical topics.
Mike Salem from Udacity’s Robotics Nanodegree is hosting a series of interviews with professional roboticists as part of their free online material.
This week we’re featuring Mike’s interview with Nick Kohut, Co-Founder and CEO of Dash Robotics.
Nick is a former robotics postdoc at Stanford and received his PhD in Control Systems from UC Berkeley. At Dash Robotics, Nick handles team-building and project management.
You can find all the interviews here. We’ll be posting one per week on Robohub.