
Robotics industry growing faster than expected

Two reputable research resources are reporting that the robotics industry is growing more rapidly than expected. BCG (Boston Consulting Group) is conservatively projecting that the market will reach $87 billion by 2025; Tractica, incorporating the robotic and AI elements of the emerging self-driving industry, is forecasting the market will reach $237 billion by 2022.

Both research firms acknowledge that yesterday’s robots — which were blind, big, dangerous and difficult to program and maintain — are being replaced and supplemented with newer, more capable ones. Today's new robots, and those to come, will have voice and language recognition, access to super-fast communications, data and libraries of algorithms, learning capability, mobility, portability and dexterity. These new precision robots can sort and fill prescriptions; pick and pack warehouse orders; sort, inspect, process and handle fruits and vegetables; and perform a myriad of other industrial and non-industrial tasks, most of them faster than humans, all while working safely alongside them.

Boston Consulting Group (BCG)

Gaining Robotic Advantage, June 2017, 13 pages, free

BCG suggests that business executives be aware of ways robots are changing the global business landscape and think and act now. They see robotics-fueled changes coming in retail, logistics, transportation, healthcare, food processing, mining and agriculture.

BCG cites the following drivers:

  • Private investment in the robotic space has continued to amaze with exponential year-over-year funding curves and sensational billion dollar acquisitions.
  • Prices continue to fall on robots, sensors, CPUs and communications while capabilities continue to increase.
  • Robot programming is being transformed by easier interfaces, GUIs and ROS.
  • The prospect of a self-driving vehicles industry disrupting transportation is propelling a talent grab and strategic acquisitions by competing international players with deep pockets.
  • 40% of robotic startups have been in the consumer sector and will soon augment humans in high-touch fields such as health and elder care.

BCG also cites the following example of how paying close attention can yield an advantage:

“Amazon gained a first-mover advantage in 2012 when it bought Kiva Systems, which makes robots for warehouses. Once a Kiva customer, Amazon acquired the robot maker to improve the productivity and margins of its network of warehouses and fulfillment centers. The move helped Amazon maintain its low costs and expand its rapid delivery capabilities. It took five years for a Kiva alternative to hit the market. By then, Amazon had a jump on its rivals and had developed an experienced robotics team, giving the company a sustainable edge.”

Tractica

Robotics Market Forecast – June 2017, 26 pages, $4,200
Drones for Commercial Applications – June 2017, 196 pages, $4,200
AI for Automotive Applications – May 2017, 63 pages, $4,200
Consumer Robotics – May 2017, 130 pages, $4,200

The key story is that industrial robotics, the traditional pillar of the robotics market dominated by Japanese and European manufacturers, has given way to non-industrial robot categories like personal assistant robots, UAVs, and autonomous vehicles. The epicenter is shifting toward Silicon Valley, now a hotbed for artificial intelligence (AI), a set of technologies that is, in turn, driving many of the most significant advances in robotics. Consequently, Tractica forecasts that the global robotics market will grow rapidly between 2016 and 2022, with revenue from unit sales of industrial and non-industrial robots rising from $31 billion in 2016 to $237.3 billion by 2022. The market intelligence firm anticipates that most of this growth will be driven by non-industrial robots.

Tractica is headquartered in Boulder and analyzes global market trends and applications for robotics and related automation technologies within consumer, enterprise, and industrial marketplaces and related industries.

General Research Reports

  • Global autonomous mobile robots market, June 2017, 95 pages, TechNavio, $2,500
    TechNavio forecasts that the global autonomous mobile robots market will grow at a CAGR of more than 14% through 2021.
  • Global underwater exploration robots, June 2017, 70 pages, TechNavio, $3,500
    TechNavio forecasts that the global underwater exploration robots market will grow at a CAGR of 13.92% during the period 2017-2021.
  • Household vacuum cleaners market, March 2017, 134 pages, Global Market Insights, $4,500
    Global Market Insights forecasts that household vacuum cleaners market size will surpass $17.5 billion by 2024 and global shipments are estimated to exceed 130 million units by 2024, albeit at a low 3.0% CAGR. Robotic vacuums show a slightly higher growth CAGR.
  • Global unmanned surface vehicle market, June 2017, Value Market Research, $3,950
    Value Market Research analyzed drivers (security and mapping) versus restraints such as AUVs and ROVs and made their forecasts for the period 2017-2023.
  • Innovations in Robotics, Sensor Platforms, Block Chain, and Artificial Intelligence for Homeland Security, May 2017, Frost & Sullivan, $6,950
    This Frost & Sullivan report covers recent developments such as co-bots for surveillance applications, airborne sensor platforms for border security, blockchain tech, AI as first responder, and tech for detecting nuclear threats.
  • Top technologies in advanced manufacturing and automation, April 2017, Frost & Sullivan, $4,950
    This Frost & Sullivan report focuses on exoskeletons, metal and nano 3D printing, co-bots and agile robots – all of which are in the top 10 technologies covered.
  • Mobile robotics market, December 2016, 110 pages, Zion Market Research, $4,199
    Zion Market Research forecasts that the global mobile robotics market will reach $18.8 billion by the end of 2021, growing at a CAGR of slightly above 13.0% between 2017 and 2021.
  • Unmanned surface vehicle (USV) market, May 2017, MarketsandMarkets, $5,650
    MarketsandMarkets forecasts the unmanned surface vehicle (USV) market to grow from $470.1 Million in 2017 to $938.5 Million by 2022, at a CAGR of 14.83%.
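As a quick arithmetic check, the CAGR figures in these bullets follow directly from their endpoint revenues. For example, the MarketsandMarkets USV numbers in the last bullet reproduce the stated 14.83% over the five-year span 2017-2022 (the formula below is the standard compound-growth relation, not anything specific to these reports):

```python
# Compound annual growth rate (CAGR) from start/end values and number of years.
def cagr(start_value: float, end_value: float, years: int) -> float:
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Figures from the MarketsandMarkets USV bullet above: $470.1M (2017) -> $938.5M (2022).
print(f"USV market CAGR 2017-2022: {cagr(470.1, 938.5, 5):.2%}")  # ~14.83%
```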

Agricultural Research Reports

  • Global agricultural robots market, May 2017, 70 pages, TechNavio, $2,500
    Forecasts the global agricultural robots market will grow steadily at a CAGR of close to 18% through 2021.
  • Agriculture robots market, June 2017, TMR Research, $3,716
    Robots are poised to replace agricultural hands. They can pluck fruits, sow and reap crops, and milk cows, carrying out these tasks much faster and with a greater degree of accuracy. This, coupled with mandates for higher minimum pay in many countries, has been good news for the global agriculture robots market.
  • Agricultural Robots, December 2016, 225 pages, Tractica, $4,200
    Forecasts that shipments of agricultural robots will increase from 32,000 units in 2016 to 594,000 units annually in 2024 and that the market will reach $74.1 billion in annual revenue by 2024. The report, done in conjunction with The Robot Report, profiles over 165 companies involved in developing robotics for the industry.

Second edition of Springer Handbook of Robotics

The second edition of the award-winning Springer Handbook of Robotics, edited by Bruno Siciliano and Oussama Khatib, has recently been published. The contents of the first edition have been restructured to achieve four main objectives: the enlargement of foundational topics for robotics, the enlightenment of the design of various types of robotic systems, the extension of the treatment of robots moving in the environment, and the enrichment of advanced robotics applications. Most previous chapters have been revised, fifteen new chapters have been introduced on emerging topics, and a new generation of authors has joined the handbook’s team. As with the first edition, a truly interdisciplinary approach has been pursued, in line with the expansion of robotics across the boundaries with related disciplines. Again, the authors have been asked to step outside their comfort zones, as the Editorial Board has teamed up authors who had never worked together before.

No doubt one of the most innovative elements is the inclusion of multimedia content that leverages the valuable written content inside the book. Under the editorial leadership of Torsten Kröger, a web portal has been created to host the Multimedia Extension of the book, which serves as a quick one-stop shop for the more than 700 videos associated with specific chapters. In addition, video tutorials have been created for each of the seven parts of the book, which benefit everyone from PhD students to seasoned robotics experts who have been in the field for years. A special video related to the contents of the first chapter shows the journey of robotics through the latest and coolest developments of the last 15 years. As publishing explores new interactive technologies, an app has been made available in the Google Play and iOS app stores to introduce an additional multimedia layer to the reader’s experience. With the app, readers can point their smartphone or tablet camera at a page containing one or more special icons and see an augmented-reality overlay on the screen, watching videos as they read along in the book.


For more information on the book, please visit the Springer Handbook website.

The Multimedia Portal offers free access to more than 700 accompanying videos. In addition, a Multimedia App is now downloadable from the App Store and Google Play for smartphones and tablets, allowing you to easily access multimedia content while reading the book.

New Horizon 2020 robotics projects, 2016: CYBERLEGs++

In 2016, the European Union co-funded 17 new robotics projects from the Horizon 2020 Framework Programme for research and innovation. 16 of these resulted from the robotics work programme, and 1 project resulted from the Societal Challenges part of Horizon 2020. The robotics work programme implements the robotics strategy developed by SPARC, the Public-Private Partnership for Robotics in Europe (see the Strategic Research Agenda). 

Every week, euRobotics will publish a video interview with a project, so that you can find out more about their activities. This week features CYBERLEGs++: The CYBERnetic LowEr-Limb CoGnitive Ortho-prosthesis Plus Plus.

Objectives

The goal of CYBERLEGs++ is to validate the technical and economic viability of the powered robotic ortho-prosthesis developed within the FP7-ICT-CYBERLEGs project. The aim is to enhance or restore the mobility of transfemoral amputees and to enable them to perform locomotion tasks such as ground-level walking, walking up and down slopes, climbing and descending stairs, standing up, sitting down and turning in real-life scenarios. Restored mobility will allow amputees to perform physical activity, thus counteracting physical decline and improving their overall health status and quality of life.


Expected Impact

By demonstrating a modular robotics technology for healthcare in an operational environment (TRL 7), from both a technical and an economic viability viewpoint, with the ultimate goal of fostering its market exploitation, CYBERLEGs Plus Plus will have an impact on:

  • Society: CLs++ technology will contribute to increasing the mobility of dysvascular amputees and, more generally, of disabled persons with mild lower-limb impairments;
  • Science and technology: CLs++ will further advance the hardware and software modules of the ortho-prosthesis developed within the FP7 CYBERLEGs project and validate its efficacy through a multi-centre clinical study;
  • Market: CLs++ will foster the market exploitation of high-tech robotic systems and thus promote the growth of both a robotics SME and a large healthcare company.

Partners
SCUOLA SUPERIORE SANT’ANNA (SSSA)
UNIVERSITÉ CATHOLIQUE DE LOUVAIN (UCL)
VRIJE UNIVERSITEIT BRUSSEL (VUB)
UNIVERZA V LJUBLJANI (UL)
FONDAZIONE DON CARLO GNOCCHI (FDG)
ÖSSUR (OSS)
IUVO S.R.L. (IUVO)

Coordinator
Prof. Nicola Vitiello, The BioRobotics Institute
Scuola Superiore Sant’Anna, Pisa, Italy
nicola.vitiello@santannapisa.it

Project website
www.cyberlegs.org

Watch all EU-projects videos


Rapid outdoor/indoor 3D mapping with a Husky UGV

by Nicholas Charron

The need for fast, accurate 3D mapping solutions has quickly become a reality for many industries wanting to adopt new technologies in AI and automation. New applications requiring these 3D mapping platforms include surveillance, mining, automated measurement & inspection, construction management & decommissioning, and photo-realistic rendering. Here at Clearpath Robotics, we decided to team up with Mandala Robotics to show how easily you can implement 3D mapping on a Clearpath robot.

3D Mapping Overview

3D mapping on a mobile robot requires Simultaneous Localization and Mapping (SLAM), for which there are many different solutions available. Localization can be achieved by fusing many different types of pose estimates. Pose estimation can be done using combinations of GPS measurements, wheel encoders, inertial measurement units, 2D or 3D scan registration, optical flow, visual feature tracking and other techniques. Mapping can be done simultaneously using the lidars and cameras that are used for scan registration and visual position tracking, respectively. This allows a mobile robot to track its position while creating a map of the environment. Choosing which SLAM solution to use is highly dependent on the application and the environment to be mapped. Although many 3D SLAM software packages exist and cannot all be discussed here, there are few 3D mapping hardware platforms that offer full end-to-end 3D reconstruction on a mobile platform.
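As a minimal illustration of the fusion idea, independent of any particular SLAM package: two estimates of the same pose coordinate, each with its own uncertainty, can be combined by inverse-variance weighting, which is the scalar core of what Kalman-filter-based localization does. The numbers below are purely illustrative.

```python
# Minimal sketch: fuse two independent estimates of one pose coordinate by
# inverse-variance weighting (the scalar building block of Kalman-style fusion).
def fuse(x1: float, var1: float, x2: float, var2: float):
    w1, w2 = 1.0 / var1, 1.0 / var2
    x_fused = (w1 * x1 + w2 * x2) / (w1 + w2)
    var_fused = 1.0 / (w1 + w2)
    return x_fused, var_fused

# Example: wheel odometry says x = 10.0 m (noisy), scan registration says
# x = 10.4 m (tighter). The fused estimate leans toward the tighter one.
x, var = fuse(10.0, 0.25, 10.4, 0.04)
print(x, var)  # ~10.35 m, variance ~0.034
```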

Existing 3D Mapping Platforms

We will briefly highlight some of the more popular commercialized 3D mapping platforms, which carry one or more lidars and, in some cases, optical cameras for point cloud data collection. It is important to note that there are two ways to collect a 3D point cloud using lidars:

1. Use a 3D lidar, a single device with multiple horizontally stacked laser beams
2. Tilt or rotate a 2D lidar to get 3D coverage

Tilting of a 2D lidar typically refers to back-and-forth rotating of the lidar about its horizontal plane, while rotating usually refers to continuous 360 degree rotation of a vertically or horizontally mounted lidar.
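To see how a tilting or rotating 2D lidar yields 3D coverage, each 2D scan can be projected into 3D using the tilt angle at the moment it was captured. The sketch below assumes a simple tilt about the x-axis and dummy range data; real units have their own frame conventions and calibration.

```python
import numpy as np

def scan_to_3d(ranges, bearings, tilt_angle):
    """Project one 2D scan (range/bearing pairs in the lidar plane) into 3D,
    given the tilt of the lidar plane about the x-axis at scan time."""
    # Points in the lidar's own 2D plane (z = 0 before tilting).
    pts = np.stack([ranges * np.cos(bearings),
                    ranges * np.sin(bearings),
                    np.zeros_like(ranges)], axis=1)
    c, s = np.cos(tilt_angle), np.sin(tilt_angle)
    rot_x = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])  # rotation about x
    return pts @ rot_x.T

# Accumulate a cloud over many scans taken at different tilt angles.
cloud = np.vstack([scan_to_3d(np.full(180, 2.0),                     # dummy 2 m ranges
                              np.linspace(-np.pi / 2, np.pi / 2, 180),
                              tilt)
                   for tilt in np.linspace(-0.5, 0.5, 50)])
print(cloud.shape)  # (9000, 3)
```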

Example 3D Mapping Platforms: 1. MultiSense SL (Left) by Carnegie Robotics, 2. 3DLS-K (Middle) by Fraunhofer IAIS Institute, 3. Cartographer Backpack (Right) by Google.

1. MultiSense SL

The MultiSense SL was developed by Carnegie Robotics and provides a compact and lightweight 3D data collection unit for researchers. The unit has a tilting Hokuyo 2D lidar, a stereo camera, LED lights, and is pre-calibrated for the user. This allows for the generation of coloured point clouds. This platform comes with a full software development kit (SDK), open source ROS software, and is the sensor of choice for the DARPA Robotics Challenge for humanoid robots.

2. 3DLS-K

The 3DLS-K is a dual-tilting unit made by Fraunhofer IAIS Institute with the option of using SICK LMS-200 or LMS-291 lidars. Fraunhofer IAIS also offers other configurations with continuously rotating 2D SICK or Hokuyo lidars. These systems allow for the collection of non-coloured point clouds. With the purchase of these units, a full application program interface (API) is available for configuring the system and collecting data.

3. Cartographer Backpack

The Cartographer Backpack is a mapping unit with two static Hokuyo lidars (one horizontal and one vertical) and an on-board computer. Google released cartographer software as an open source library for performing 3D mapping with multiple possible sensor configurations. The Cartographer Backpack is an example of a possible configuration to map with this software. Cartographer allows for integration of multiple 2D lidars, 3D lidars, IMU and cameras, and is also fully supported in ROS. Datasets are also publicly available for those who want to see mapping results in ROS.

Mandala Mapping – System Overview

Thanks to the team at Mandala Robotics, we got our hands on one of their 3D mapping units to try some mapping on our own. This unit consists of a mount for a rotating vertical lidar, a fixed horizontal lidar, as well as an onboard computer with an Nvidia GeForce GTX 1050 Ti GPU. The horizontal lidar allows for the implementation of 2D scan registration as well as 2D mapping and obstacle avoidance. The vertical rotating lidar is used for acquiring the 3D point cloud data. In our implementation, real-time SLAM was performed solely using 3D scan registration (more on this later) specifically programmed for full utilization of the onboard GPU. The software used to implement this mapping can be found on the mandala-mapping github repository.

Scan registration is the process of combining (or stitching together) two subsequent point clouds (either in 2D or 3D) to estimate the change in pose between the scans. This yields motion estimates to be used in SLAM and also allows a new point cloud to be added to an existing one in order to build a map. This process is achieved by running iterative closest point (ICP) between the two subsequent scans. ICP performs a closest-neighbour search to match all points from the reference scan to a point on the new scan. Subsequently, optimization is performed to find the rotation and translation matrices that minimise the distance between the closest neighbours. By iterating this process, the result converges to the true rotation and translation that the robot underwent between the two scans. This is the process that was used for 3D mapping in the following demo.
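For readers who want to see the shape of that loop, here is a compact point-to-point ICP sketch in plain NumPy/SciPy. It is a generic illustration, not Mandala's GPU implementation; the fixed iteration count and the absence of outlier rejection are simplifications.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(reference, new_scan, iterations=30):
    """Minimal point-to-point ICP: estimate the rotation R and translation t
    that align new_scan onto reference (both are N x 3 arrays)."""
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(reference)
    src = new_scan.copy()
    for _ in range(iterations):
        # 1. Closest-neighbour matching: each source point to its nearest reference point.
        _, idx = tree.query(src)
        matched = reference[idx]
        # 2. Optimal rigid transform between the matched sets (Kabsch / SVD).
        src_c, ref_c = src.mean(axis=0), matched.mean(axis=0)
        H = (src - src_c).T @ (matched - ref_c)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = ref_c - R @ src_c
        # 3. Apply the increment, accumulate, and repeat toward convergence.
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

In practice, GPU implementations parallelize the nearest-neighbour search, which dominates the cost for large clouds.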

Mandala Robotics has also released additional examples of GPU computing tasks useful for robotics and SLAM. These examples can be found here.

Mandala Mapping Results

The following video shows some of our results from mapping areas within the Clearpath office, lab and parking lot. The datasets collected for this video can be downloaded here.

The Mandala Mapping software was very easy to get up and running for someone with basic knowledge in ROS. There is one launch file which runs the Husky base software as well as the 3D mapping. Initiating each scan can be done by sending a simple scan request message to the mapping node, or by pressing one button on the joystick used to drive the Husky. Furthermore, with a little more ROS knowledge, it is easy to incorporate autonomy into the 3D mapping. Our forked repository shows how a short C++ script can be written to enable constant scan intervals while navigating in a straight line. Alternatively, one could easily incorporate 2D SLAM such as gmapping together with the move_base package in order to give specific scanning goals within a map.
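As a rough illustration of that kind of autonomy, the sketch below drives forward while requesting a scan at a fixed interval. The scan-request topic name and message type here are placeholders, not the actual mandala-mapping interface; consult the repository README for the real ones.

```python
#!/usr/bin/env python
# Sketch: trigger scans at constant intervals while the Husky drives forward.
# '/scan_request' and std_msgs/Empty are illustrative placeholders for the
# real mapping-node interface defined in the mandala-mapping repository.
import rospy
from std_msgs.msg import Empty
from geometry_msgs.msg import Twist

rospy.init_node('constant_interval_scanner')
scan_pub = rospy.Publisher('/scan_request', Empty, queue_size=1)  # placeholder topic
cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)

forward = Twist()
forward.linear.x = 0.3            # drive slowly in a straight line

rate = rospy.Rate(10)             # 10 Hz command loop
last_scan = rospy.Time.now()
while not rospy.is_shutdown():
    cmd_pub.publish(forward)
    if (rospy.Time.now() - last_scan).to_sec() > 15.0:   # request a scan every 15 s
        scan_pub.publish(Empty())
        last_scan = rospy.Time.now()
    rate.sleep()
```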

Why use Mandala Mapping on your robot?

If you are looking for a quick and easy way to collect 3D point clouds, with the versatility to use multiple lidar types, then this system is a great choice. The hardware work involved with setting up the unit is minimal and well documented, and it is preconfigured to work with your Clearpath Husky. Therefore, you can be up and running with ROS in a few days! The mapping is done in real time, with only a little lag time as your point cloud size grows, and it allows you to visualize your map as you drive.

The downside to this system, compared to the MultiSense SL for example, is that you cannot yet get a coloured point cloud, since no cameras have been integrated into the system. However, Mandala Robotics is currently in the beta testing stage for a similar system with an additional 360-degree camera. This system uses the Ladybug5 and will allow RGB colour to be mapped to each of the point cloud elements. Keep an eye out for future Clearpath blogs in case we get our hands on one of these systems! All things considered, the Mandala Mapping kit offers a great alternative to the other units mentioned above and fills many of the gaps in their functionality.


Baidu’s self-driving tech plans revealed

In the race to develop self-driving technology, Chinese Internet giant Baidu unveiled its 50+ partners in an open source development program, revised its timeline for introducing autonomous driving capabilities on open city roads, described the Project Apollo consortium and its goals, and declared Apollo to be the ‘Android of the autonomous driving industry’.

At a developer's conference last week in Beijing, Baidu described its plans and timetable for its self-driving car technology. It will start test-driving in restricted environments immediately, before gradually introducing fully autonomous driving capabilities on highways and open city roads by 2020. Baidu's goal is to get those vehicles on the roads in China, the world's biggest auto market, with the hope that the same technology, embedded in exported Chinese vehicles, can then conquer the United States. To do so, Baidu has compiled a list of cooperative partners, a consortium of 50+ public and private entities, and named it Apollo, after NASA's massive Apollo moon-landing program. 

Project Apollo

The program is making its autonomous car software open source in the same way that Google released its Android operating system for smartphones. By encouraging companies to build upon the system and share their results, it hopes to overtake rivals such as Google/Waymo, Tencent, Alibaba and others researching self-driving technology. 

MIT Technology Review provided a description of the open source Apollo project:

The Apollo platform consists of a core software stack, a number of cloud services, and self-driving vehicle hardware such as GPS, cameras, lidar, and radar.

The software currently available to outside developers is relatively simple: it can record the behavior of a car being driven by a person and then play that back in autonomous mode. This November, the company plans to release perception capabilities that will allow Apollo cars to identify objects in their vicinity. This will be followed by planning and localization capabilities, and a driver interface.

The cloud services being developed by Baidu include mapping services, a simulation platform, a security framework, and Baidu’s DuerOS voice-interface technology.

Members of the project include Chinese automakers Chery, Dongfeng Motor, Foton, Nio, Yiqi and FAW Group. Tier 1 members include Continental, Bosch, Intel, Nvidia, Microsoft and Velodyne. Other partners include Chinese universities, governmental agencies, Autonomous Stuff, TomTom, Grab and Ford. The full list of members can be seen here.

Quoting from Bloomberg News regarding the business aspect of Project Apollo:

China has set a goal for 10 to 20 percent of vehicles to be highly autonomous by 2025, and for 10 percent of cars to be fully self-driving in 2030. Didi Chuxing, the ride-sharing app that beat Uber in China, is working on its own product, as are several local automakers. It’s too early to tell which will ultimately succeed though Baidu’s partnership approach is sound, said Marie Sun, an analyst with Morningstar Investment Service.

“This type of technology needs cooperation between software and hardware from auto-manufacturers so it’s not just Baidu that can lead this,” she said. If Baidu spins off the car unit, “in the longer term, Baidu should maintain a major shareholder position so they can lead the growth of the business.”

Baidu and Apollo have a significant advantage over Google's Waymo: Baidu has a presence in the United States, whereas Alphabet has none in China because Google closed down its search site in 2010 rather than give in to China's internet censorship.

Strategic Issue

According to the Financial Times, “autonomous vehicles pose an existential threat [to global car manufacturers]. Instead of owning cars, consumers in the driverless age will simply summon a robotic transportation service to their door. One venture capitalist says auto executives have come to him saying they know they are “screwed”, but just want to know when it will happen.” 

This desperation has prompted a string of big acquisitions and joint ventures amongst competing providers including those in China. Citing just a few:

  • Last year GM paid $1bn for Cruise, a self-driving car start-up.
  • Uber paid $680m for Otto, an autonomous trucking company that was less than a year old.
  • In March, Intel spent $15bn to buy Israel’s Mobileye, which makes self-driving sensors and software.
  • Baidu acquired Raven Tech, an Amazon Echo competitor; 8i, an augmented reality hologram startup; Kitt, a conversational language engine; and XPerception, a vision systems developer.
  • Tencent invested in mapping provider Here and acquired 5% of Tesla.
  • Alibaba announced that it is partnering with Chinese Big 4 carmaker SAIC in their self-driving effort.

China Network

Baidu’s research team in Silicon Valley is pivotal to their goals. Baidu was one of the first of the Chinese companies to set up in Silicon Valley, initially to tap into SV's talent pool. Today it is the center of a “China network” of almost three dozen firms, through investments, acquisitions and partnerships. 

Baidu is rapidly moving forward from the SV center: 

  • It formed a self-driving car sub-unit in April which now employs more than 100 researchers and engineers. 
  • It partnered with chipmaker Nvidia.
  • It acquired vision systems startup XPerception.
  • It has begun testing its autonomous vehicles in China and California. 

Regarding XPerception, Gartner research analyst Michael Ramsey said in a CNBC interview:

“XPerception has expertise in processing and identifying images, an important part of the sensing for autonomous vehicles. The purchase may help push Baidu closer to the leaders, but it is just one piece.”

XPerception is just one of many Baidu puzzle pieces intended to bring talent and intellectual property to the Apollo project. It acquired Raven Tech and Kitt AI to gain conversational transaction processing. It acquired 8i, an augmented reality hologram startup, to add AR — which many expect to be crucial in future cars — to the project. And it suggested that the acquisition spree will continue as needed.

Bottom Line

China has set a goal for 10 to 20 percent of vehicles to be highly autonomous by 2025, and for 10 percent of cars to be fully self-driving by 2030. Baidu wants to provide the technology to get those vehicles on the roads in China, with the hope that the same technology, embedded in exported Chinese vehicles, can then conquer the United States. It seems well poised to do so.

Robots Podcast #238: Midwest Speech and Language Days 2017 Posters, with Michael White, Dmitriy Dligach and Denis Newman-Griffiths



In this episode, MeiXing Dong conducts interviews at the 2017 Midwest Speech and Language Days workshop in Chicago. She talks with Michael White of Ohio State University about question interpretation in a dialogue system; Dmitriy Dligach of Loyola University Chicago about extracting patient timelines from doctor’s notes; and Denis Newman-Griffiths of Ohio State University about connecting words and phrases to relevant medical topics.


Udacity Robotics video series: Interview with Nick Kohut from Dash Robotics


Mike Salem from Udacity’s Robotics Nanodegree is hosting a series of interviews with professional roboticists as part of their free online material.

This week we’re featuring Mike’s interview with Nick Kohut, Co-Founder and CEO of Dash Robotics.

Nick is a former robotics postdoc at Stanford and received his PhD in Control Systems from UC Berkeley. At Dash Robotics, Nick handles team-building and project management.

You can find all the interviews here. We’ll be posting one per week on Robohub.

China’s e-commerce dynamo JD makes deliveries via mobile robots

China’s second-biggest e-commerce company, JD.com (Alibaba is first), is testing mobile robots to make deliveries to its customers, and imagining a future with fully unmanned logistics systems.

 Story idea and images courtesy of RoboticsToday.com.au.

On the last day of a two-week-long shopping bonanza that recorded sales of around $13 billion, some deliveries were made using mobile robots designed by JD. It’s the first time that the company has used delivery robots in the field. The bots delivered packages to multiple Beijing university campuses such as Tsinghua University and Renmin University. 

JD has been testing delivery robots since November last year. At that time, the cost of a single robot was almost $88,000.

They have been working on lowering the cost and increasing the capabilities since then. The white, four-wheeled UGVs can carry five packages at once and travel 13 miles on a charge. They can climb up a 25° incline and find the shortest route from warehouse to destination.

Once it reaches its destination, the robot sends a text message to notify the recipient of the delivery. Users can accept the delivery through face-recognition technology or by using a code.

The UGVs now cost $7,300 per unit, which JD figures can reduce delivery costs from less than $1 for a human delivery to about 20 cents for a robot delivery.

JD is also testing the world’s largest drone-delivery network, including flying drones carrying products weighing as much as 2,000 pounds.

“Our logistics systems can be unmanned and 100% automated in 5 to 8 years,” said Liu Qiangdong, JD’s chairman.

Simulated car demo using ROS Kinetic and Gazebo 8

By Tully Foote

We are excited to show off a simulation of a Prius in Mcity using ROS Kinetic and Gazebo 8. ROS enabled the simulation to be developed faster by using existing software and libraries. The vehicle’s throttle, brake, steering, and transmission are controlled by publishing to a ROS topic. All sensor data is published using ROS, and can be visualized with RViz.
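For a sense of what that interface looks like, the sketch below publishes a steady throttle and steering command. The message type and topic name used here (prius_msgs/Control on /prius) are assumptions based on the osrf/car_demo repository; check its README to confirm them before use.

```python
#!/usr/bin/env python
# Sketch: command the simulated Prius over ROS. The prius_msgs/Control message
# and the /prius topic are assumed from the osrf/car_demo sources; verify
# against the repository README.
import rospy
from prius_msgs.msg import Control

rospy.init_node('prius_teleop_sketch')
pub = rospy.Publisher('/prius', Control, queue_size=1)

cmd = Control()
cmd.shift_gears = Control.FORWARD
cmd.throttle = 0.4      # 0..1
cmd.brake = 0.0         # 0..1
cmd.steer = 0.1         # gentle left turn, roughly -1..1

rate = rospy.Rate(20)
while not rospy.is_shutdown():
    cmd.header.stamp = rospy.Time.now()
    pub.publish(cmd)
    rate.sleep()
```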

We leveraged Gazebo’s capabilities to incorporate existing models and sensors.
The world contains a new model of Mcity and a freeway interchange. There are also models from the gazebo models repository including dumpsters, traffic cones, and a gas station. On the vehicle itself there is a 16-beam lidar on the roof, 8 ultrasonic sensors, 4 cameras, and 2 planar lidars.

The simulation is open source and available on GitHub at osrf/car_demo. Try it out by installing nvidia-docker and pulling “osrf/car_demo” from Docker Hub. More information about building and running is available in the README in the source repository.

Talking Machines: Bias variance dilemma for humans and the arm farm, with Jeff Dean

In episode four of season three, Neil introduces us to the ideas behind the bias variance dilemma (and how we can think about it in our daily lives). Plus, we answer a listener question about how to make sure your neural networks don’t get fooled. Our guest for this episode is Jeff Dean, Google Senior Fellow in the Research Group, where he leads the Google Brain project. We talk about a closet full of robot arms (the arm farm!), image recognition for diabetic retinopathy, and equality in data and the community.

Fun Fact: Geoff Hinton’s distant relative invented the word tesseract. (How cool is that. Seriously.)



June 2017 fundings, acquisitions, and IPOs

June 2017 saw two robotics-related companies get $50 million each and 17 others raise $248 million, for a monthly total of $348 million. Acquisitions also continued to be substantial, with SoftBank's acquisition of Google's robotic properties Boston Dynamics and Schaft plus two other acquisitions.

Fundings

  • Drive.ai raised $50 million in a Series B funding round, led by New Enterprise Associates, Inc. (NEA) with participation from GGV Capital and existing investors including Northern Light Venture Capital. Andrew Ng, who led AI projects at Baidu and Google (and is husband to Drive.ai’s co-founder and president Carol Reiley), joined the board of directors and said:

    “The cutting-edge of autonomous driving has shifted squarely to deep learning. Even traditional autonomous driving teams have 'sprinkled on' some deep learning, but Drive.ai is at the forefront of leveraging deep learning to build a truly modern autonomous driving software stack.”

  • Aera Technology, renamed from FusionOps, a Silicon Valley software and AI provider, raised $50 million from New Enterprise Associates. Aera seems to be the first RPA to actuate in the physical world. Merck uses Aera to predict demand, determine product routing and interact with warehouse management systems to enact what’s needed.

    “The leap from transactional automation to cognitive automation is imminent and it will forever transform the way we work,” says Frederic Laluyaux, President and CEO of Aera. “At Aera, we deliver the technology that enables the Self-Driving Enterprise: a cognitive operating system that connects you with your business and autonomously orchestrates your operations.”

  • Swift Navigation, the San Francisco tech firm building centimeter-accurate GPS technology to power a world of autonomous vehicles, raised $34 million in a Series B financing round led by New Enterprise Associates (NEA), with participation from existing investors Eclipse and First Round Capital. Swift provides solutions to over 2,000 customers, including autonomous vehicles, precision agriculture, unmanned aerial vehicles (UAVs), robotics, maritime, transportation/logistics and outdoor industrial applications. By moving GPS positioning from custom hardware to a flexible software-based receiver, Swift Navigation delivers Real Time Kinematic (RTK) GPS (100 times more accurate than traditional GPS) at a fraction of the cost ($2k) of alternative RTK systems.
  • AeroFarms raised over $34 million of a $40 million Series D, bringing the New Jersey-based indoor vertical farming startup's total fundraising to over $130 million since 2014, including a $40 million note from Goldman Sachs and Prudential. AeroFarms, which now has 9 operating indoor farms, grows leafy greens using aeroponics, raising them in a misting environment without soil using LED lights and growth algorithms.
  • Seven Dreamers Labs, a Tokyo startup commercializing the Laundroid robot, raised $22.8 million from KKR’s co-founders Henry Kravis and George Roberts, Chinese conglomerate Fosun International, and others. Laundroid is being developed with Panasonic and Daiwa House.
  • Bowery Farming, which raised $7.5 million earlier this year, raised an additional $20 million from General Catalyst, GGV Capital and GV (formerly Google Ventures). Bowery’s first indoor farm in Kearny, NJ, uses proprietary computer software, LED lighting and robotics to grow leafy greens without pesticides and with 95% less water than traditional agriculture.
  • Drone Racing League raised $20 million in a Series B investment round led by Sky, Liberty Media and Lux Capital, and new investors Allianz and World Wrestling Entertainment, plus existing investors Hearst Ventures, RSE Ventures, Lerer Hippeau Ventures, and Courtside Ventures.
  • Momentum Machines, the SF-based startup developing a hamburger-making robot, raised $18.4 million in an equity offering of $21.8 million, from existing investors Lemnos Labs, GV, K5 Ventures and Khosla Ventures. The company has been working on its first retail location since at least June of last year. There is still no scheduled opening date for the flagship, though it's expected to be located in San Francisco's South of Market neighborhood.
  • AEye, a startup developing a solid state LiDAR and other vision systems for self-driving cars, raised $16 million in a Series A round led by Kleiner Perkins Caufield & Byers, Airbus Ventures, Intel Capital, Tyche Partners and others.

    Said Luis Dussan, CEO of AEye: “The biggest bottleneck to the rollout of robotic vision solutions has been the industry’s inability to deliver a world-class perception layer. Quick, accurate, intelligent interpretation of the environment that leverages and extends the human experience is the Holy Grail, and that’s exactly what AEye intends to deliver.”

  • Snips, an NYC voice recognition AI startup, raised $13 million in a Series A round led by MAIF Avenir with PSIM Fund managed by Bpifrance, with previous investor Eniac Ventures as well as K-Fund 1 and Korelya Capital joining the round. Snips makes an on-device system that parses and understands speech better than Amazon's Alexa.
  • Misty Robotics, a spin-out from Orbotix/Sphero, raised $11.5 million in Series A funding from Venrock, Foundry Group and others. Ian Bernstein, former Sphero co-founder and CTO, will be taking the role of Head of Product and is joined by five other autonomous robotics division team members. Misty Robotics will use its new capital to build out the team and accelerate product development. Sphero and Misty Robotics will have a close partnership and have signed co-marketing and co-development agreements.
  • Superflex, a spin-off from SRI, has raised $10.2 million in equity financing from 10 unnamed investors. Superflex is developing a powered suit that supports the wearer’s torso, hips and legs, designed for individuals experiencing mobility difficulties and for people working in challenging environments.
  • Nongtian Guanjia (FarmFriend), a Chinese drone/ag industry software startup, raised $7.36 million in a round led by Gobi Partners with existing investors GGV Capital, Shunwei Capital, the Zhen Fund and Yunqi Partners.
  • Carmera, a NYC-based auto tech startup, unstealthed this week with $6.4M in funding led by Matrix Partners. The two-year-old company has been quietly collecting data for its 3D mapping solution, partnering with delivery fleets to install its sensor and data collection platform.
  • Cognata, an Israeli deep learning simulation startup, raised $5 million from Emerge, Maniv Mobility, and Airbus Ventures. Cognata recently launched a self-driving vehicle road-testing simulation package.

    “Every autonomous vehicle developer faces the same challenge—it is really hard to generate the numerous edge cases and the wide variety of real-world environments. Our simulation platform rapidly pumps out large volumes of rich training data to fuel these algorithms,” said Cognata’s Danny Atsmon.

  • SoftWear Automation, the GA Tech and DARPA sponsored startup developing sewing robots for apparel manufacturing, raised $4.5 million in a Series A round from CTW Venture Partners.
  • Knightscope, a startup developing robotic security technologies, raised $3 million from Konica Minolta. The capital is to be invested in Knightscope’s current Reg A+ “mini-IPO” offering of Series M Preferred Stock.
  • Multi Tower Co, a Danish medical device startup, raised around $1.12 million through a network of private and public investors, the most notable of which were Syddansk Innovation, Rikkesege Invest, M. Blæsbjerg Holding and Dahl Gruppen Holding. The Multi Tower Robot, used to lift and move hospital patients, is being developed through Blue Ocean Robotics’ partnership program, RoBi-X, in a public-private partnership (PPP) between University Hospital Køge, Multi Tower Company and Blue Ocean Robotics.
  • Optimus Ride, an MIT spinoff developing self-driving tech, raised $1.1 million in financing from an undisclosed investor.

Acquisitions

  • SoftBank acquired Boston Dynamics and Schaft from Google Alphabet for an undisclosed amount.
    • Boston Dynamics, a DARPA- and DoD-funded 25-year-old company, designs two- and four-legged robots for the military. Videos of BD’s robots WildCat, Big Dog, Cheetah and, most recently, Handle continue to be YouTube hits. Handle is a two-wheeled, four-legged hybrid robot that can stand, walk, run and roll at up to 9 MPH.
    • Schaft, a Japanese participant in the DARPA Robotics Challenge, recently unveiled an updated version of its two-legged robot that can climb stairs, carry 125 pounds of payload, move in tight spaces and keep its balance throughout.
  • IPG Photonics, a laser component manufacturer/integrator of welding and laser-cutting systems, including robotic ones, acquired Innovative Laser Technologies, a Minnesota laser systems maker, for $40 million. 
  • Motivo Engineering, an engineering product developer, has acquired Robodondo, an ag tech integrator focused on food processing, for an undisclosed amount.

IPOs

  • None. Nada. Zip.

Peering into neural networks

Neural networks learn to perform computational tasks by analyzing large sets of training data. But once they’ve been trained, even their designers rarely have any idea what data elements they’re processing.
Image: Christine Daniloff/MIT

By Larry Hardesty

Neural networks, which learn to perform computational tasks by analyzing large sets of training data, are responsible for today’s best-performing artificial intelligence systems, from speech recognition systems, to automatic translators, to self-driving cars.

But neural nets are black boxes. Once they’ve been trained, even their designers rarely have any idea what they’re doing — what data elements they’re processing and how.

Two years ago, a team of computer-vision researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) described a method for peering into the black box of a neural net trained to identify visual scenes. The method provided some interesting insights, but it required data to be sent to human reviewers recruited through Amazon’s Mechanical Turk crowdsourcing service.

At this year’s Computer Vision and Pattern Recognition conference, CSAIL researchers will present a fully automated version of the same system. Where the previous paper reported the analysis of one type of neural network trained to perform one task, the new paper reports the analysis of four types of neural networks trained to perform more than 20 tasks, including recognizing scenes and objects, colorizing grey images, and solving puzzles. Some of the new networks are so large that analyzing any one of them would have been cost-prohibitive under the old method.

The researchers also conducted several sets of experiments on their networks that not only shed light on the nature of several computer-vision and computational-photography algorithms, but could also provide some evidence about the organization of the human brain.

Neural networks are so called because they loosely resemble the human nervous system, with large numbers of fairly simple but densely connected information-processing “nodes.” Like neurons, a neural net’s nodes receive information signals from their neighbors and then either “fire” — emitting their own signals — or don’t. And as with neurons, the strength of a node’s firing response can vary.

In both the new paper and the earlier one, the MIT researchers doctored neural networks trained to perform computer vision tasks so that they disclosed the strength with which individual nodes fired in response to different input images. Then they selected the 10 input images that provoked the strongest response from each node.
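In code terms, that bookkeeping step is simple. The sketch below uses plain NumPy over a precomputed response matrix; the array names, shapes and random data are purely illustrative, not the researchers' code.

```python
import numpy as np

# responses[i, j] = strength with which node j fired on input image i.
# (Illustrative shapes: 10,000 images, 512 nodes in the layer of interest.)
responses = np.random.rand(10_000, 512)

def top_images_per_node(responses: np.ndarray, k: int = 10) -> np.ndarray:
    """Return, for each node, the indices of the k images that provoked
    its strongest responses (shape: num_nodes x k)."""
    order = np.argsort(-responses, axis=0)   # descending by response strength
    return order[:k].T

top10 = top_images_per_node(responses)
print(top10.shape)   # (512, 10)
```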

In the earlier paper, the researchers sent the images to workers recruited through Mechanical Turk, who were asked to identify what the images had in common. In the new paper, they use a computer system instead.

“We catalogued 1,100 visual concepts — things like the color green, or a swirly texture, or wood material, or a human face, or a bicycle wheel, or a snowy mountaintop,” says David Bau, an MIT graduate student in electrical engineering and computer science and one of the paper’s two first authors. “We drew on several data sets that other people had developed, and merged them into a broadly and densely labeled data set of visual concepts. It’s got many, many labels, and for each label we know which pixels in which image correspond to that label.”

The paper’s other authors are Bolei Zhou, co-first author and fellow graduate student; Antonio Torralba, MIT professor of electrical engineering and computer science; Aude Oliva, CSAIL principal research scientist; and Aditya Khosla, who earned his PhD as a member of Torralba’s group and is now the chief technology officer of the medical-computing company PathAI.

The researchers also knew which pixels of which images corresponded to a given network node’s strongest responses. Today’s neural nets are organized into layers. Data are fed into the lowest layer, which processes them and passes them to the next layer, and so on. With visual data, the input images are broken into small chunks, and each chunk is fed to a separate input node.

For every strong response from a high-level node in one of their networks, the researchers could trace back the firing patterns that led to it, and thus identify the specific image pixels it was responding to. Because their system could frequently identify labels that corresponded to the precise pixel clusters that provoked a strong response from a given node, it could characterize the node’s behavior with great specificity.
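One plausible way to make that characterization concrete, though not necessarily the authors' exact procedure, is to score the overlap between the pixels a node responds to strongly and the pixels carrying each concept label, then keep the best-scoring concept:

```python
import numpy as np

def concept_score(activation_map: np.ndarray, concept_mask: np.ndarray,
                  threshold: float) -> float:
    """Intersection-over-union between a node's strongly activated pixels and
    a binary mask of pixels labeled with one visual concept."""
    active = activation_map > threshold
    intersection = np.logical_and(active, concept_mask).sum()
    union = np.logical_or(active, concept_mask).sum()
    return intersection / union if union else 0.0

def best_concept(activation_map: np.ndarray, concept_masks: dict, threshold: float):
    """Return the label whose pixels best overlap the node's activations,
    along with all scores. concept_masks maps label name -> binary mask."""
    scores = {name: concept_score(activation_map, mask, threshold)
              for name, mask in concept_masks.items()}
    return max(scores, key=scores.get), scores
```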

The researchers organized the visual concepts in their database into a hierarchy. Each level of the hierarchy incorporates concepts from the level below, beginning with colors and working upward through textures, materials, parts, objects, and scenes. Typically, lower layers of a neural network would fire in response to simpler visual properties — such as colors and textures — and higher layers would fire in response to more complex properties.

But the hierarchy also allowed the researchers to quantify the emphasis that networks trained to perform different tasks placed on different visual properties. For instance, a network trained to colorize black-and-white images devoted a large majority of its nodes to recognizing textures. Another network, when trained to track objects across several frames of video, devoted a higher percentage of its nodes to scene recognition than it did when trained to recognize scenes; in that case, many of its nodes were in fact dedicated to object detection.

One of the researchers’ experiments could conceivably shed light on a vexed question in neuroscience. Research involving human subjects with electrodes implanted in their brains to control severe neurological disorders has seemed to suggest that individual neurons in the brain fire in response to specific visual stimuli. This hypothesis, originally called the grandmother-neuron hypothesis, is more familiar to a recent generation of neuroscientists as the Jennifer-Aniston-neuron hypothesis, after the discovery that several neurological patients had neurons that appeared to respond only to depictions of particular Hollywood celebrities.

Many neuroscientists dispute this interpretation. They argue that shifting constellations of neurons, rather than individual neurons, anchor sensory discriminations in the brain. Thus, the so-called Jennifer Aniston neuron is merely one of many neurons that collectively fire in response to images of Jennifer Aniston. And it’s probably part of many other constellations that fire in response to stimuli that haven’t been tested yet.

Because their new analytic technique is fully automated, the MIT researchers were able to test whether something similar takes place in a neural network trained to recognize visual scenes. In addition to identifying individual network nodes that were tuned to particular visual concepts, they also considered randomly selected combinations of nodes. Combinations of nodes, however, picked out far fewer visual concepts than individual nodes did — roughly 80 percent fewer.

“To my eye, this is suggesting that neural networks are actually trying to approximate getting a grandmother neuron,” Bau says. “They’re not trying to just smear the idea of grandmother all over the place. They’re trying to assign it to a neuron. It’s this interesting hint of this structure that most people don’t believe is that simple.”

The Drone Center’s Weekly Roundup: 7/3/17

The OR-3 autonomous security robot will begin patrolling parts of Dubai. Credit: Otsaw Digital

At the Center for the Study of the Drone

In a podcast at The Drone Radio Show, Arthur Holland Michel discusses the Center for the Study of the Drone’s recent research on local drone regulations, public safety drones, and legal incidents involving unmanned aircraft.

In a series of podcasts at the Center for a New American Security, Dan Gettinger discusses trends in drone proliferation and the U.S. policy on drone exports.

News

The U.S. Court of Appeals for the D.C. Circuit dismissed a lawsuit over the death of several civilians from a U.S. drone strike in Yemen, concurring with the decision of a lower court. In the decision, Judge Janice Rogers Brown argued that Congress had nevertheless failed in its oversight of the U.S. military. (The Hill)

Commentary, Analysis, and Art

At the Bulletin of the Atomic Scientists, Michael Horowitz argues that the Missile Technology Control Regime is poorly suited to manage international drone proliferation.

At War on the Rocks, Joe Chapa argues that debates over the ethics of drone strikes are often clouded by misconceptions.

At Phys.org, Julien Girault writes that Chinese drone maker DJI is looking at how its consumer drones can be applied to farming.

At IHS Jane’s Navy International, Anika Torruella looks at how the U.S. Navy is investing in unmanned and autonomous technologies.

Also at IHS Jane’s, Anika Torruella writes that the U.S. Navy does not plan to include large unmanned undersea vehicles as part of its 355-ship fleet goal.

At Defense One, Brett Velicovich looks at how consumer drones can easily be altered to carry a weapons payload.

At Aviation Week, James Drew considers how U.S. drone firm General Atomics is working to develop the next generation of drones.

At Popular Science, Kelsey D. Atherton looks at how legislation in California could prohibit drone-on-drone cage fights.

At the Charlotte Observer, Robin Hayes argues that Congress should not grant Amazon blanket permission to fly delivery drones.

At the MIT Technology Review, Bruce Y. Lee argues that though medicine-carrying drones may be expensive, they will save lives.

In a speech at the SMi Future Armoured Vehicles Weapon Systems conference in London, U.S. Marine Corps Colonel Jim Jenkins discussed the service’s desire to use small, cheap autonomous drones on the battlefield. (IHS Jane’s 360)

At the Conversation, Andres Guadamuz considers whether the works of robot artists should be protected by copyright.

Know Your Drone

A team at the MIT Computer Science and Artificial Intelligence Laboratory has built a multirotor drone that is also capable of driving around on wheels like a ground robot. (CNET)

Facebook conducted a test flight of its Aquila solar-powered Internet drone. (Fortune)

Meanwhile, China Aerospace Science and Technology Corporation conducted a 15-hour test flight of its Cai Hong solar-powered drone at an altitude of over 65,000 feet. (IHS Jane’s 360)

The Defense Advanced Research Projects Agency successfully tested autonomous quadcopters that were able to navigate a complex obstacle course without GPS. (Press Release)

French firm ECA group is modifying its IT180 helicopter drone for naval operations. (Press Release)

Italian firm Leonardo plans to debut its SD-150 rotary-wing military drone in the third quarter of 2017. (IHS Jane’s 360)

Researchers at MIT are developing a drone capable of remaining airborne for up to five days at a time. (TechCrunch)

Drones at Work

The government of Malawi and humanitarian agency Unicef have launched an air corridor to test drones for emergency response and medical deliveries. (BBC)

French police have begun using drones to search for migrants crossing the border with Italy. (The Telegraph)

Researchers from Missouri University have been testing drones to conduct inspections of water towers. (Missourian)

An Australian drug syndicate reportedly used aerial drones to run counter-surveillance on law enforcement officers during a failed bid to import cocaine into Melbourne. (BBC)

In a simulated exercise in New Jersey, first responders used a drone to provide temporary cell coverage to teams on the ground. (AUVSI)

The International Olympic Committee has announced that chipmaker Intel will provide drones for light shows at future Olympic games. (CNN)

The U.S. Air Force has performed its first combat mission with the new Block 5 variant of the MQ-9 Reaper. (UPI)

The police department in West Seneca, New York has acquired a drone. (WKBW)

Chinese logistics firm SF Express has obtained approval from the Chinese government to operate delivery drones over five towns in Eastern China. (GBTimes)

Portugal’s Air Traffic Accident Prevention and Investigation Office is leading an investigation into a number of close encounters between drones and manned aircraft in the country’s airspace. (AIN Online)

The U.S. Federal Aviation Administration and app company AirMap are developing a system that will automate low-altitude drone operation authorizations. (Drone360)

Police in Arizona arrested a man for allegedly flying a drone over a wildfire. (Associated Press)

Dubai’s police will deploy the Otsaw Digital O-R3, an autonomous security robot equipped with facial recognition software and a built-in drone, to patrol difficult-to-reach areas. (Washington Post)

The University of Southampton writes that Boaty McBoatface, an unmanned undersea vehicle, captured “unprecedented data” during its voyage to the Orkney Passage.

Five flights were diverted from Gatwick Airport when a drone was spotted flying nearby. (BBC)

Industry Intel

The U.S. Special Operations Command awarded Arcturus UAV a contract to compete in the selection of the Mid-Endurance Unmanned Aircraft System. AAI Corp. and Insitu are also competing. (DoD)

The U.S. Air Force awarded General Atomics Aeronautical a $27.6 million contract for the MQ-9 Gen 4 Predator primary datalink. (DoD)

The U.S. Army awarded AAI Corp. a $12 million contract modification for the Shadow v2 release 6 system baseline update. (DoD)

The U.S. Army awarded DBISP a $73,392 contract for 150 quadrotor drones made by DJI and other manufacturers. (FBO)

The Department of the Interior awarded NAYINTY3 a $7,742 contract for the Agisoft Photo Scan, computer software designed to process images from drones. (FBO)

The Federal Aviation Administration awarded Computer Sciences Corporation a $200,000 contract for work on drone registration. (USASpending)

The U.S. Navy awarded Hensel Phelps a $36 million contract to build a hangar for the MQ-4C Triton surveillance drone at Naval Station Mayport in Florida. (First Coast News)

The U.S. Navy awarded Kratos Defense & Security Solutions a $35 million contract for the BQM-177A target drones. (Military.com)

NATO awarded Leonardo a contract for logistic and support services for the Alliance Ground Surveillance system. (Shephard Media)

Clobotics, a Shanghai-based startup that develops artificial intelligence-equipped drones for infrastructure inspection, announced that it has raised $5 million in seed funding. (GeekWire)

AeroVironment’s stock fell despite a $124.4 million surge in revenue in its fiscal fourth quarter. (Motley Fool)

Ford is creating the Robotics and Artificial Intelligence Research team to study emerging technologies. (Ford Motor Company)

For updates, news, and commentary, follow us on Twitter. The Weekly Drone Roundup is a newsletter from the Center for the Study of the Drone. It covers news, commentary, analysis and technology from the drone world. You can subscribe to the Roundup here.

Building with robots and 3D printers: Construction of the DFAB HOUSE up and running

At the Empa and Eawag NEST building in Dübendorf, eight ETH Zurich professors, as part of the Swiss National Centre of Competence in Research (NCCR) Digital Fabrication, are collaborating with business partners to build the three-storey DFAB HOUSE. It is the first building in the world to be designed, planned and built using predominantly digital processes.

Robots that build walls and 3D printers that print entire formworks for ceiling slabs – digital fabrication in architecture has developed rapidly in recent years. As part of the National Centre of Competence in Research (NCCR) Digital Fabrication, architects, robotics specialists, material scientists, structural engineers and sustainability experts from ETH Zurich have teamed up with business partners to put several new digital building technologies from the laboratory into practice. Construction is taking place at NEST, the modular research and innovation building that Empa and Eawag built on their campus in Dübendorf to test new building and energy technologies under real conditions. NEST offers a central support structure with three open platforms, where individual construction projects – known as innovation units – can be installed. Construction recently began on the DFAB HOUSE.

Digitally Designed, Planned and Built
The DFAB HOUSE is distinctive in that it was not only digitally designed and planned, but is also being built using predominantly digital processes. With this pilot project, the ETH professors want to examine how digital technology can make construction more sustainable and efficient, and increase its design potential. The individual components are digitally coordinated based on the design and manufactured directly from this data; the conventional planning phase is no longer needed. As of summer 2018, the three-storey building, with a floor space of 200 m², will serve as a residential and working space for Empa and Eawag guest researchers and partners of NEST.
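To make the idea of manufacturing "directly from this data" concrete, here is a minimal, purely hypothetical sketch of how a component in a digital design model might be translated straight into fabrication parameters with no separate planning document. None of these names or parameters come from the DFAB HOUSE tool chain; they are assumptions for illustration.

```python
# Hypothetical sketch: a design component flows directly into fabrication
# instructions. All names and parameters are illustrative, not taken from
# the DFAB HOUSE project.
from dataclasses import dataclass

@dataclass
class WallSegment:
    """A parametric component as it might exist in a digital design model."""
    segment_id: str
    start_xyz: tuple      # world coordinates of the segment start, in metres
    end_xyz: tuple        # world coordinates of the segment end, in metres
    curvature: float      # 1/radius in 1/m; 0.0 means a straight segment
    height_m: float

def to_fabrication_instructions(segment: WallSegment) -> dict:
    """Derive robot fabrication parameters directly from the design data,
    skipping a separately produced planning document."""
    return {
        "component": segment.segment_id,
        "toolpath": [segment.start_xyz, segment.end_xyz],
        "layer_height_mm": 25,          # illustrative process parameter
        "curvature_per_m": segment.curvature,
        "total_height_m": segment.height_m,
    }

if __name__ == "__main__":
    wall = WallSegment("W-01", (0.0, 0.0, 0.0), (4.2, 1.1, 0.0), 0.15, 2.8)
    print(to_fabrication_instructions(wall))
```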

Four New Building Methods Put to the Test
At the DFAB HOUSE, four construction methods are being transferred from research to architectural applications for the first time. Construction work began with the Mesh Mould technology, which received the Swiss Technology Award at the end of 2016. The result will be a double-curved load-bearing concrete wall that will shape the architecture of the open-plan living and working area on the ground floor. A “Smart Slab” will then be installed – a statically optimised and functionally integrated ceiling slab, for which the researchers used a large-format 3D sand printer to manufacture the formwork.

Smart Dynamic Casting technology is being used for the façade on the ground floor: the automated robotic slip-forming process can produce tailor-made concrete façade posts. The two upper floors, with individual rooms, are being prefabricated at ETH Zurich’s Robotic Fabrication Laboratory using spatial timber assemblies; cooperating robots will assemble the timber construction elements.

More Information in ETH Zurich Press Release and on Project Website
Detailed information about the building process, quotes, and image and video material can be found in the extended press release by ETH Zurich. In addition, a project website for the DFAB HOUSE is currently in development and will soon be available at www.dfabhouse.ch. Until then, Empa’s website offers information about the project: https://www.empa.ch/web/nest/digital-fabrication

NCCR Investigators Involved with the DFAB HOUSE:
Prof. Matthias Kohler, Chair of Architecture and Digital Fabrication
Prof. Fabio Gramazio, Chair of Architecture and Digital Fabrication
Prof. Benjamin Dillenburger, Chair for Digital Building Technologies
Prof. Joseph Schwartz, Chair of Structural Design
Prof. Robert Flatt, Institute for Building Materials
Prof. Walter Kaufmann, Institute of Structural Engineering
Prof. Guillaume Habert, Institute of Construction & Infrastructure Management
Prof. Jonas Buchli, Institute of Robotics and Intelligent Systems

Can we test robocars the way we tested regular cars?

I’ve written a few times that perhaps the biggest unsolved problem in robocars is how to know we have made them safe enough. While most people think of that in terms of government certification, the truth is that the teams building the cars are very focused on this, and know more about it than any regulator, but they still don’t know enough. The challenge is going to be convincing your board of directors that the car is safe enough to release, for if it is not, it could ruin the company that releases it, at least if it’s a big company with a reputation.

We don’t even have a good definition of what “safe enough” is, though most people roughly take it to mean “a safety record superior to the average human.” Some think it should be much more; few think it should be less. Tesla, now with the backing of the NTSB, has noted that their autopilot system, combined with a mix of mostly attentive and some inattentive humans, may have a record superior to the average human, even though the record with the inattentive humans alone is worse.

Last week I attended a conference in Stuttgart devoted to robocar safety testing, part of a larger auto show including an auto testing show. It was interesting to see the main auto testing show — scores of expensive and specialized machines and tools that subject cars to wear and tear, slamming doors thousands of times, baking the surfaces, rattling and vibrating everything. And testing the electronics, too.

In Europe, the focus of testing is very strongly on making sure you are compliant with standards and regulations. That’s true in the USA as well, but not quite to the same degree. It was in Europe some time ago that I learned the word “homologation,” which names this process.


There is a lot to be learned from the previous regimes of testing. They have built a lot of tools and learned many techniques. But robocars are different beasts, and will fail in different ways. They will definitely not fail the way human drivers do, where small things go wrong all the time and an accident happens when two or three of them go wrong at once.

The conference included a lot of people working on simulation, which I have been promoting for many years. The one good thing in the NHTSA regulations, the open public database of all incidents, may vanish in the new rules, and it would have been a great source of simulator scenarios. The companies making the simulators (and the academic world) would have put every incident into a shared simulator so every new car could test itself in every known problem situation.
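As a purely illustrative sketch of what such a shared incident database might store, here is a minimal record format that would let any simulator replay a reported incident. Every field name is an assumption for illustration, not something drawn from the NHTSA rules or any simulator vendor.

```python
# Hypothetical record for a shared incident database; every field name here
# is an assumption for illustration, not drawn from any real regulation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class IncidentScenario:
    incident_id: str
    road_type: str            # e.g. "urban_intersection", "divided_highway"
    weather: str              # e.g. "clear", "rain", "snow"
    ego_speed_mps: float      # speed of the reporting vehicle at the start
    actors: List[dict] = field(default_factory=list)  # other road users, with
                                                      # initial poses and paths
    outcome: str = "near_miss"  # or "collision", "disengagement", ...

# Any simulator could ingest a list of these records and replay each one
# against a new vehicle's driving software before release.
scenario = IncidentScenario(
    incident_id="2017-06-001",
    road_type="urban_intersection",
    weather="rain",
    ego_speed_mps=11.0,
    actors=[{"type": "cyclist", "initial_offset_m": (8.0, -2.5)}],
)
```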

Still, we will see lots of simulators full of scenarios, and also ways to parameterize them. That means that instead of just testing how a car behaves if somebody cuts it off, you test what it does if it gets cut off with a gap of 1 cm, or 10 cm, or 1 m, or 2 m, by different types of vehicles, by two at once, and so on. The nice thing about computers is that you can test just about every variation you can think of, in every road situation and every type of weather, at least if your simulator is good enough.
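As a rough sketch of what parameterizing a scenario looks like in practice, the cut-in example above might be swept like this. The scenario fields and the run_in_simulator call are hypothetical stand-ins for whatever interface a real simulator exposes.

```python
# Hypothetical parameter sweep over a single "cut-in" scenario.
# run_in_simulator() is an invented stand-in for a real simulator API.
from itertools import product

gaps_m = [0.01, 0.1, 1.0, 2.0]                 # gap left by the cutting-in vehicle
vehicle_types = ["sedan", "motorcycle", "semi_truck"]
weather = ["clear", "rain", "fog"]
ego_speeds_mps = [10, 20, 30]

def run_in_simulator(scenario: dict) -> bool:
    """Placeholder for a real simulator call; always 'passes' in this sketch."""
    return True

failures = []
for gap, vehicle, wx, speed in product(gaps_m, vehicle_types, weather, ego_speeds_mps):
    scenario = {"type": "cut_in", "gap_m": gap, "cutting_vehicle": vehicle,
                "weather": wx, "ego_speed_mps": speed}
    if not run_in_simulator(scenario):
        failures.append(scenario)

# With a real simulator attached, `failures` would list every parameter
# combination the driving software could not handle, across
# 4 * 3 * 3 * 3 = 108 runs of this one scenario family.
```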

Yoav Hollander, whom I met when he came as a student to the program at Singularity U, wrote a report on the approaches to testing he saw at the conference that contains useful insights, particularly on this question of new and old thinking, and on what is driven by regulation versus liability and fear of the public. He puts it well: traditional, certification-oriented testing focuses on assuring you don’t have “expected bugs” but is poor at finding unexpected ones; other testing is about finding unexpected bugs. Expected bugs are of the “we’ve seen this sort of thing before, we want to be sure you don’t suffer from it” kind. Unexpected bugs are “something goes wrong that we didn’t know to look for.”

Avoiding old thinking

I believe that we are far from done on the robocar safety question. I think there are startups who have not yet been founded who, in the future, will come up with new techniques both for promoting safety and testing it that nobody has yet thought of. As such, I strongly advise against thinking that we know very much about how to do it yet.

A classic example of things going wrong is the movement towards “explainable AI.” Here, people are concerned that we don’t really know how “black box” neural network tools make the decisions they do. Car regulations in Europe are moving towards banning in-car software whose decisions can’t be explained. In the USA, the draft NHTSA regulations suggest the same thing, though not as strongly.

We may find ourselves in a situation where we take two systems for robocars, one explainable and the other not. We put them through the best testing we can, both in simulator and, most importantly, in the real world. We find the explainable system has a “safety incident” every 100,000 miles, and the unexplainable system has an incident every 150,000 miles. To me it seems obvious that it would be insane to make a law that demands the former system, which, when deployed, will hurt more people. We’ll know why it hurt them. We might be better at fixing the problems, but we also might not: with the unexplainable system we’ll be able to make sure that particular error does not happen again, but we won’t be sure that others very close to it are eliminated.
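The arithmetic behind that claim is worth making explicit. Using the hypothetical incident rates above, a fleet driving a fixed number of miles is hurt less often by the unexplainable system; the fleet mileage below is an assumed figure for illustration only.

```python
# Back-of-the-envelope comparison of the two hypothetical systems above.
fleet_miles_per_year = 1_000_000_000   # assumed fleet mileage, for illustration

explainable_rate = 1 / 100_000         # incidents per mile (hypothetical)
unexplainable_rate = 1 / 150_000       # incidents per mile (hypothetical)

explainable_incidents = fleet_miles_per_year * explainable_rate      # 10,000
unexplainable_incidents = fleet_miles_per_year * unexplainable_rate  # ~6,667

print(f"Explainable system:   {explainable_incidents:,.0f} incidents/year")
print(f"Unexplainable system: {unexplainable_incidents:,.0f} incidents/year")
# Mandating the explainable system would mean roughly 3,300 extra incidents
# per billion fleet miles in this illustration.
```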

Testing in sim is a challenge here. In theory, every car should get no errors in sim, because any error found in sim will be fixed or judged as not really an error, or so rare as to be unworthy of fixing. Even trained machine learning systems will be retrained until they get no errors in sim. The only way to do this sort of testing in sim will be to have teams generate brand new scenarios in sim that the cars have never seen, and see how they do. We will do this, but it’s hard. Particularly because as the sims get better, there will be fewer and fewer real world situations they don’t contain. At best, the test suite will offer some new highly unusual situations, which may not be the best way to really judge the quality of the cars.

In addition, teams will be willing to pay simulator companies well for new and dangerous scenarios in sim for their testing (more than the government agencies will pay for such scenarios). And of course, once a new scenario displays a problem, every customer will fix it and it will become much less valuable. Eventually, as government regulations become more prevalent, homologation companies will charge to test your compliance rate on their test suites, but again, they will need to generate a new suite every time since everybody will want the data to fix any failure. This is not like emissions testing, where they tell you that you went over the emissions limit, and it’s worth testing the same thing again.
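One way to picture the “fresh suite every time” problem is as a held-out test set that is retired as soon as its results are shared. The sketch below is a workflow illustration only; the scenario generator and the evaluation call are invented stand-ins, not any real homologation tool.

```python
# Hypothetical workflow for a one-shot, held-out scenario suite.
# generate_scenario() and evaluate() are invented stand-ins.
import random

def generate_scenario(rng: random.Random) -> dict:
    """Produce a scenario the developer has never seen (illustrative fields)."""
    return {
        "type": rng.choice(["cut_in", "jaywalker", "debris", "occluded_signal"]),
        "weather": rng.choice(["clear", "rain", "snow", "fog"]),
        "ego_speed_mps": rng.uniform(5, 35),
    }

def evaluate(scenario: dict) -> bool:
    """Placeholder: True if the driving software handles the scenario safely."""
    return True   # stand-in; a real harness would run the car's software here

def run_held_out_suite(seed: int, n: int = 500) -> float:
    """Run a freshly generated suite once and report the failure rate.
    Once the scenarios and results are handed to the developer, the suite is
    effectively burned: every failure will be fixed, so it cannot be reused
    to compare vehicles fairly."""
    rng = random.Random(seed)
    failures = sum(0 if evaluate(generate_scenario(rng)) else 1 for _ in range(n))
    return failures / n

print(f"Failure rate on held-out suite: {run_held_out_suite(seed=2017):.3%}")
```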

The testing was interesting, but my other main focus was on the connected car and security sessions. More on that to come.
