In today’s rapidly changing global digital landscape, AI’s presence is growing in almost every industry. Retail giants like Amazon and Alibaba are using machine-learned algorithms to add value to the customer experience. Machine learning is also prevalent in the emerging service robotics world as robots transition from blind, dumb and caged to mobile and perceptive.
Competition is fiercest between the US and China, even though other countries and global corporations have large AI programs as well. The competition is real, intense and dramatic. Talent is hard to find and costly; it’s a complex field that few fully understand, so the talent pool is limited. Grabs of key players and companies headline the news every few days: “Apple hires away Google’s chief of search and AI.” “Amazon acquires AI cybersecurity startup.” “IBM invests millions into MIT AI research lab.” “Oracle acquires Zenedge.” “Ford acquires auto tech startup Argo AI.” “Baidu hires three world-renowned artificial intelligence scientists.”
The media, partly because of the complexity of the subject and partly from lack of knowledge, frighten people with scare headlines about misuse and autonomous weaponry, and exaggerate the competition into a hotly contested war for mastery of the field. It’s not really a “war,” but it is dramatic, and it’s playing out right now on many levels: immigration law, intellectual property transgressions, trade war fears, labor cost and availability challenges, and unfair competitive practices, as well as technological breakthroughs and falling costs that enable experimentation and testing.
Two recent trends have sparked widespread use of machine learning: the availability of massive amounts of training data, and powerful and efficient parallel computing. GPUs are parallel processors, and they are used to train deep neural networks in far less time, and with far less datacenter infrastructure, than non-parallel computing systems.
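To make that concrete, here is a minimal PyTorch sketch (PyTorch is one of several frameworks that target GPUs through Nvidia’s CUDA); the tiny model and synthetic data are illustrative stand-ins, and the only GPU-specific step is moving tensors to the device:

```python
# Minimal PyTorch sketch: the same training step runs on CPU or GPU;
# moving the model and data to a GPU parallelizes the tensor math.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A small feed-forward network standing in for a deep model.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Synthetic stand-in for one batch of training data.
inputs = torch.randn(64, 128, device=device)
labels = torch.randint(0, 10, (64,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()   # gradients computed in parallel on the GPU
optimizer.step()
```

On large networks and datasets this same loop, repeated millions of times, is where GPU parallelism pays off.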
Service and mobile robots often need all their computing power onboard, unlike stationary robots whose control systems sit in separate nearby boxes. Sometimes onboard computing involves multiple processors; other times it requires supercomputer-class performance of the kind chip makers now deliver through parallel processing. Nvidia’s Jetson chip and its Isaac lab and toolset are one example.
Nvidia
Nvidia’s recent GPU Technology Conference, held in San Jose last month, highlighted the company’s goal of capturing the robotics AI market. Nvidia has set up an SDK and a lab to help robotics companies capture and learn from the data their robots process as they navigate and perceive.
Nvidia’s Jetson GPU, SDK, toolset and simulation platform are designed to help roboticists build and test robotics applications while simultaneously managing the various onboard processes such as perception, navigation and manipulation. As a demonstration of the breadth of its toolset, Nvidia had a delivery robot carting objects around the show.
Nvidia is offering libraries, SDKs, APIs, an open-source deep learning accelerator, and other tools to encourage robot makers to incorporate Nvidia chips into their products. Nvidia sees this as a future source of revenue; right now it is mostly research and experimentation.
Examples of deep learning in robotics
In a recent CBInsights graphic categorizing the 2018 AI 100, 12 companies were highlighted in the robotics and auto technology sectors. Note from the Venn diagram that not all AI companies are involved with robotics; in fact, most aren’t (there were 2,000+ startups in the pool from which the 100 were chosen). The reverse is true of robotics companies as well.
- Robotics:
  - Vicarious
  - Kindred (CA)
  - Anki
  - UBTech (CN)
  - Brain Corp
  - Neurala
  - CloudMinds (CN)
- Auto Tech:
Here are four use cases of robot companies using AI chips in their products:
- Cobalt Robotics – CEO and co-founder Travis Deyle says, “Cobalt uses a high-end NVidia GPU (a 1080 variant) directly on the robot. We do a lot of processing locally (e.g. anomaly detection, person detection, etc) using a host of libraries: CUDA, TensorFlow, and various computer vision libraries. The algorithms running on the robot are just the tip of the iceberg. The on-robot detectors and classifiers are tuned to be very sensitive; upon detection, data is transmitted to the internet and runs through an extensive cloud-based machine learning pipeline and ultimately flags a remote human specialist for additional input and high-level decision making. The cloud-based pipeline also makes use of deep-learning processing power, which is likely powered by NVidia as well.” A minimal sketch of this edge-plus-cloud pattern appears after this list.
- Bossa Nova Robotics – Walmart is partnering with San Francisco-based robotics company Bossa Nova on robots that roam the grocery and health products aisles of Walmart stores, auditing shelves and sending data back to employees so that missing items are restocked and incorrect prices and wrong or missing labels are caught. Bossa Nova’s Walmart robots house three Nvidia GPUs: one for navigation and mapping; another for perception and image stitching (the robot views 6′ of shelving while moving at 2 mph); and a third for computing and analyzing what it sees and turning that information into actionable restocking reports.
- Fetch Robotics – In addition to navigation, collision avoidance and mapping, Fetch Robotics’ automated material transports and its new data-survey line of AMRs collect data continuously and consistently. When the robots recharge themselves, all the stored data is uploaded to the cloud for post-processing and analytics, a pattern sketched in the second example after this list.
- TuSimple (CN) – Beijing-based TuSimple’s truck-driving technology is focused on the middle mile, i.e., transporting container boxes from one hub to another. Along the way, TuSimple trucks can detect and track objects at distances greater than 300 meters through advanced sensor fusion that combines data from multiple cameras with decimeter-level localization. Simultaneously, the truck’s decision-making system dynamically adapts to road conditions, changing lanes and adjusting driving speed as needed. TuSimple uses Nvidia GPUs, Nvidia DRIVE PX 2, Jetson TX2, CUDA, TensorRT and cuDNN in its autonomous driving solution.
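Here is a hypothetical sketch of the edge-plus-cloud pattern Deyle describes: a deliberately sensitive on-robot detector escalates candidate events to a heavier cloud pipeline, which may in turn flag a human specialist. Every name in it (the endpoint URL, the functions, the threshold) is invented for illustration; this is not Cobalt’s actual software.

```python
# Hypothetical edge-plus-cloud escalation sketch; all names illustrative.
import requests

CLOUD_ENDPOINT = "https://example.com/api/events"  # placeholder URL
LOCAL_THRESHOLD = 0.3  # low threshold: the edge detector prefers false positives

def process_frame(frame, local_detector):
    """Run the lightweight on-robot detector on one camera frame."""
    for det in local_detector(frame):  # e.g., person/anomaly scores
        if det["score"] >= LOCAL_THRESHOLD:
            escalate(frame, det)

def escalate(frame, detection):
    """Ship the candidate event to the cloud pipeline for deeper analysis."""
    payload = {"detection": detection, "frame_id": id(frame)}
    resp = requests.post(CLOUD_ENDPOINT, json=payload, timeout=5)
    if resp.ok and resp.json().get("needs_human"):
        notify_specialist(resp.json())  # a remote human makes the final call

def notify_specialist(event):
    print("Flagging remote specialist:", event)
```

The design choice is the low local threshold: the robot can afford to over-report because the expensive models, and ultimately a person, filter the stream downstream.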
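And a similarly hypothetical sketch of Fetch’s collect-continuously, upload-while-charging pattern; the robot interface and the upload function are assumptions, not Fetch’s API.

```python
# Illustrative collect-then-upload sketch; robot.is_charging() and
# upload_batch are hypothetical stand-ins for the real interfaces.
import queue
import time

log_buffer = queue.Queue()

def record(sample):
    """Called continuously while the robot works its normal tasks."""
    log_buffer.put(sample)

def upload_when_docked(robot, upload_batch):
    """Drain the buffer to cloud storage whenever the robot is charging."""
    while True:
        if robot.is_charging() and not log_buffer.empty():
            batch = []
            while not log_buffer.empty():
                batch.append(log_buffer.get())
            upload_batch(batch)  # post-processing and analytics run cloud-side
        time.sleep(10)
```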
The China factor
Twelve years ago, as a long-term national strategic goal, China crafted five-year plans with specific targets: encourage the use of robots in manufacturing to enhance quality and reduce the need for unskilled labor, and establish in-country robot manufacturing to reduce reliance on foreign suppliers. After three successive well-funded and fully incentivized five-year robotics plans, the transformation is easy to see: robot and component manufacturers have grown from fewer than 10 to more than 700, and companies using robots in their manufacturing and material handling processes have grown similarly.
[NOTE: During the same period, America implemented various manufacturing initiatives involving robotics; however, none were comparably funded or, more importantly, funded continuously over time.]
Recently China turned its focus to artificial intelligence. Specifically, it has set out a three-pronged plan: catch up with the field’s leaders by 2020; achieve parity in autonomous vehicles, image recognition and, perhaps, simultaneous translation by 2025; and lead the world in AI and machine learning by 2030.
Western companies doing business in China have been plagued by intellectual property theft, copying and reverse engineering, and heavy-handed partnerships and joint ventures in which IP must be handed over to the Chinese venture. Steve Dickinson, a lawyer with Harris | Bricken, a Seattle law firm whose slogan is “Tough Markets; Bold Lawyers,” wrote:
“With respect to appropriating the technology and then selling it back into the developed market from which it came: that is of course the Chinese strategy. It is the strategy of businesses in every developing country. The U.S. followed this approach during the entire 19th and early 20th centuries. Japan and Korea and Taiwan did it with great success in the post WWII era. That is how technical progress is made.”
“It is clear that appropriating foreign AI technology is the goal of every Chinese company operating in this sector [robotics, e-commerce, logistics and manufacturing]. For that reason, all foreign entities that work with Chinese companies in any way must be aware of the significant risk and must take the steps required to protect themselves.”
What is really clear is that where data is available in large quantities, as in China, and where speed is the norm and privacy is nil, as in China, AI techniques such as machine and deep learning can thrive and achieve remarkable results at breakneck speed. That is what is happening right now in China.
Bottom line:
Growth in the service robotics sector is still more promise than reality, and there is a pressing need to deliver on those promises. We have seen tremendous progress in processors, sensors, cameras and communications, but so far the integration is lacking. One roboticist characterized the integration of all that data as the need for a “reality sensor,” i.e., a higher-level indicator of what is being seen or processed. If the sensors pick up a series of pixels interpreted to be a person, and the processing determines that the person’s path will intersect your robot’s, it would be helpful to know whether it’s a pedestrian, a police officer, a firefighter, a sanitation worker, a construction worker, a surveyor, etc. That information would help refine the prediction and your robot’s actions; it would add reality to image processing and visual perception.
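A hypothetical sketch of what such a “reality sensor” layer might look like in code; the roles, the behavior priors, and the classifier interface are all invented for illustration.

```python
# Hypothetical "reality sensor" sketch: a second-stage classifier refines
# a generic "person" detection into a role, and the role informs motion
# prediction and planning. All names and priors here are illustrative.
ROLE_BEHAVIOR = {
    "pedestrian":          {"predictable_path": True,  "may_direct_robot": False},
    "police_officer":      {"predictable_path": False, "may_direct_robot": True},
    "construction_worker": {"predictable_path": False, "may_direct_robot": True},
    "surveyor":            {"predictable_path": True,  "may_direct_robot": False},
}

DEFAULT_BEHAVIOR = {"predictable_path": False, "may_direct_robot": False}

def refine(person_detection, role_classifier):
    """Turn 'these pixels are a person' into an actionable interpretation."""
    role = role_classifier(person_detection["image_crop"])  # e.g., a small CNN
    behavior = ROLE_BEHAVIOR.get(role, DEFAULT_BEHAVIOR)
    return {**person_detection, "role": role, **behavior}

def plan_around(refined):
    """Pick an avoidance behavior based on who the person appears to be."""
    if refined["may_direct_robot"]:
        return "stop_and_await_instruction"   # they may be signaling the robot
    if not refined["predictable_path"]:
        return "slow_and_yield"               # widen the safety margin
    return "continue_with_standard_avoidance"
```

The point is not the specific roles but the extra layer: a higher-level interpretation sitting above raw detection that makes the prediction, and therefore the action, more realistic.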
Even as the balance of development effort shifts from hardware toward software, there are still many challenges to overcome. Henrik Christensen, director of the Contextual Robotics Institute at the University of California San Diego, cited a few of them:
- Better end effectors and hands: we still have only very limited-capability hands, and they are far too expensive.
- The user interfaces for most robots are still very limited; e.g., different robots have different chargers.
- The cost of integrating systems is very high; we need much better plug-and-play systems.
- We see lots of use of AI and deep learning, but in most cases without performance guarantees; that is not a viable long-term solution until things improve.
It is easy to forget the science involved in robotics and embedded AI, and the many challenges that remain before we have functional, fully capable, fully interactive service robots.