Intel and Warner made a splash at the LA Auto Show announcing that Warner will develop entertainment for viewing while riding in robotaxis. It’s not just movies to watch; their hope is to produce something more like an amusement park ride to keep you engaged on your journey.
Like most partnership announcements around robocars, this one is mainly there for PR, since they haven’t built anything yet. The idea is part interesting, part hype.
I’ll start with the negative. I think people will carry their entertainment with them in their pockets, and not want it from their cars. Why would I want a different music system with a different interface when my own music and videos are already curated by me and stored in my phone? All I really want is a speaker and screen to display them on.
This is becoming very clear on planes, where I prefer to watch movies I have pre-downloaded on my phone rather than what is on the bigger screen of the in-flight entertainment system. There are several reasons for that:
The UIs on most in-flight systems suck really, really badly. I mean it’s amazing how bad most of them are. (Turns out there is a reason for that.) Cars will probably do it better but the history is not promising.
Your personal device is usually newer with more advanced technology because you replace it every 2 years. You have curated the content in it and know the interface.
On airplanes in particular, they believe rules force them to pause your experience so that they can announce that duty-free sales are now open in three languages. And 20 or more other interruptions, only a couple of which are actually important for an experienced flyer to hear.
So Warner is wise in putting a focus on doing something you can’t do with your personal gear, such as a VR experience, or immersive screens around the car. There is a unique opportunity to tune the VR experience to the actual motions of the car. In traffic, you can only tune to the needed motions. On the open road, you might actually be able to program a trip that deliberately slows or speeds up or turns when nobody else is around to make a cool VR experience.
While that might be nice, it would be mostly a gimmick, more like a ride you try once. I don’t think people will want to go everywhere in the Batmobile. As such, it will be more of a niche or marketing trick.
More interesting is the ability to reduce carsickness with audio-visual techniques. Some people get pretty queasy if they look down for a long time at a book or laptop. Others are less bothered. A phone held in the hand seems to be easier to use for most than something heavier, perhaps because it moves with the motion of the car. For many years I have proposed that cars communicate their upcoming plans with subtle audio or visual cues so that people know when they are about to turn or slow down. Some experiments are now being reported on this and it will be interesting to see the results.
If you ride on a subway, bus or commuter train today, the scene is now always the same. A row of people, all staring at their phones.
Advertising
Some commenters have speculated that another goal here may be to present advertising to hapless taxi passengers. After all, ride a New York cab (and many others) and you will see an annoying video loop playing. Each time, you have to go through the menus to mute the volume. With hailed taxis, you can’t shop around, so they can get away with doing this: what are you going to do, get out of the cab and wait for the next one?
I hope that with mobile-phone hail, competition prevents this sort of attempt to monetize the time of the customer. I definitely want my peace and quiet, and the revenue from the advertising — typically well under a dollar an hour — can’t possibly offset that for me.
Darrell Etherington for TechCrunch: Its first product is a sensor-laden suit that a person can wear to demonstrate actions so that a robot can then replicate what they do.
You can now watch all the videos online, including talks by J. Andrew Bagnell (CMU), Rodney Brooks (Rethink Robotics, MIT), Anca Dragan (UC Berkeley), Yann LeCun (Facebook, NYU) and Stefanie Tellex (Brown University).
We’ll also be posting Robohub Podcast interviews done at the conference – so stay tuned!
Twenty-two different startups were funded in November, cumulatively raising $782 million, down a bit from the $862 million in October. The big $400 million funding for UBTech and the $55 million for TuSimple were the only two fundings over $50 million, and both were Chinese startups backed by Chinese VC firms.
Six acquisitions were reported during the month including another takeover of a European robotics company by a Chinese one.
On the IPO front, three existing publicly-traded companies announced offerings of additional shares to raise further funds.
Fundings:
UBTech, a Shenzhen-based maker of humanoid robots, raised $400 million in a Series C round led by Tencent Holdings (which invested $40 million in the round). UBTech (Union Brothers Technology) builds and sells toy robots; its most recent is a $300 Star Wars Stormtrooper robot that will ship just before the movie debuts in mid-December.
TuSimple, a Chinese startup providing autonomous driving technology for the trucking industry, raised $55 million in a Series C round led by Fuhe Capital with Zhiping Capital and SINA Corp. Note that TuSimple raised $20 million in August in a Series B round.
Markforged, a Watertown, MA maker of carbon fiber and metal 3D printing devices, raised $30 million in a Series C round led by Siemens’ next47 venture firm. Microsoft Ventures and Porsche SE also participated along with existing investors Matrix Partners, North Bridge Venture Partners, and Trinity Ventures.
Kindred Systems, a Toronto warehousing AI startup, raised $27.5 million in a Series B round led by First Round Capital with participation by Tencent Holdings and Eclipse. Kindred is building human-like intelligence in machines. Its first commercial offering is Kindred Sort, a put-wall integration of arm, gripper and software to pick and sort random objects.
Locus Robotics, an Andover, MA-based provider of mobile robots for e-commerce fulfillment warehouses, raised $25 million in a Series B funding led by Scale Venture Partners with participation by existing investors.
Optimus Ride, an MIT spinoff company developing self-driving technology, raised $18 million in Series A funding. Greycroft Partners led the round, and was joined by investors including Emerson Collective, Fraser McCombs Capital and Joi Ito.
Bossa Nova, a San Francisco-based developer of autonomous service robots for the global retail industry, raised $17.5 million in Series B funding. Paxion led the round, and was joined by investors including Intel Capital, WRV Capital, Lucas Venture Group, and Cota Capital.
Riverfield Surgical Robot Lab, a Japanese startup spun off from the Tokyo Institute of Technology, raised $10 million in a Series B round from Toray Engineering, SBI Investment and JAFCO Japan.
Arbe Robotics, an Israeli radar collision avoidance platform, raised $9 million in a Series A round. O.G. Tech Ventures and OurCrowd led the round, with previous investors Canaan Partners, iAngels, and Taya Ventures also participating. Arbe is also developing radar for autonomous vehicles that facilitates real-time mapping at distances up to 300 meters.
AUBO Robotics (previously named Smokie), a Chinese co-bot manufacturer, raised $9 million in a Series A round led by Fosun RZ Capital. “When we decided to manufacture in China we had to be incorporated in China to get the incentives. They had to have a name change because the laws in China state that the name be a Chinese name. AUBO or AU BO loosely translated means ‘New Technology’. The AUBO-i5 production is in Changzhou and we also have R&D in Beijing and Knoxville TN,” said Aubo’s VP of Sales Peter Farkas.
Beijing TechX Aviation Innovation, a Chinese drone startup for military and high-end industrial users, raised $7.5 million in a Series A round from Fosun RZ Capital.
Leju Robotics, a Shenzhen startup developing humanoid robots for the service industry, raised $7.4 million (in August) from Green Pine Capital Partners and Tencent.
Embodied Intelligence, an Emeryville, CA startup developing teaching AI for existing robots, raised $7 million in a seed round led by Amplify Partners with participation from Lux Capital, SV Angels, FreeS, 11.2 Capital and A.Capital.
Apis Cor, a Moscow startup using a massive robotic 3D device for printing concrete, raised $6 million (in September) in a seed round from Rusnano Sistema Sicar venture fund.
Rokae, a Chinese startup making lightweight/lightload industrial robot arms, raised $6 million in a Series A round from THG Ventures, the venture arm of Tsinghua Holdings, and Delin Capital.
AerDrone Intl, a startup of Irelandia Aviation Drones, both of Dublin, raised $5 million in seed funding from Irelandia to provide lease financing for drone users.
GJS Robot, a Shenzhen startup making personalized fighting robots known as Ganker robots, raised an undisclosed Series A investment estimated to be $5 million, from Tencent Holdings.
Catalia Health, a San Francisco-based healthcare startup providing an AI-powered patient engagement platform, raised $4 million in pre-Series A funding. Ion Pacific led the round.
TortugaAgTech, a Colorado ag robotics startup, raised $2.6 million (in September) from SVG Partners and Thrive AgTech Ventures. Tortuga is developing robotics for indoor farming operations.
Ceres Imaging, the Oakland, CA aerial imagery and analytics company, raised an additional $2.5 million for its May 2017 Series A round (which raised $5 million). Romulus Capital was the sole investor.
Wink Robotics, a startup focused on using robotics, AI and machine vision for the beauty industry, raised $1.73 million (in August) in seed funding from undisclosed sources.
Natilus, a San Jose, Calif.-based maker of large aircraft drones to haul freight, raised seed funding of an undisclosed amount. Investors included Starburst Ventures, Seraph Group, Gelt VC, Outpost Capital and Draper Associates.
Acquisitions:
Dash Robotics, a Hayward, CA connected toys developer, acquired Austin, TX-based Bots Alive, a robotics and AI hobby kit and toy developer, for an undisclosed amount.
Huachangda Intelligent Equipment, a Chinese industrial robot integrator serving primarily the auto industry, has acquired Swedish Robot System Products (RSP) for an undisclosed amount. RSP manufactures grippers, welding systems, tool changers and other peripheral products for robots.
Atronix Engineering, a GA-based industrial robot system integrator, was acquired by MHS (Material Handling Systems), a KY-based integrator of material handling systems, for an undisclosed amount. In April 2017, MHS was itself acquired by Thomas H. Lee Partners, a Boston-based equity fund.
Mapbox, a DC and SF nav systems provider for car companies, acquired a 4-person Minsk, Belarus mapping startup, MapData, for an undisclosed amount. Just last month Mapbox raised $164 million in a round led by the SoftBank Vision Fund. The deal spearheads the hiring of more engineers to help build its next big product, an SDK that will let developers build augmented reality-based maps into their apps that will work by way of the front-facing cameras on people’s devices.
Argo AI, a Pittsburgh autonomous vehicles and AI startup, using some of the $1 billion it raised from Ford, acquired Princeton Lightwave for an undisclosed amount. Princeton is a New Jersey manufacturer of real-time Geiger-mode LiDAR technology.
Tesla acquired Perbix, a Minnesota integrator of automated machines and industrial robotics that had been a contract supplier to Tesla for many years, for around $10.5 million.
IPOs:
Titan Medical (TSE:TMD), a Canadian single-port surgical robot device maker, announced an offering of shares to raise a minimum of $14,000,000 and a maximum of $18,000,000. Titan will issue Units at a price of CDN $0.50 per Unit; each Unit comprises one common share and one warrant, exercisable for one common share at a price of CDN $0.60 for a period of five years following the closing of the Offering.
Myomo (NYSEMKT:MYO), a Cambridge, MA-based exoskeleton provider, announced an offering of 1.5 million shares of common stock, each with half a warrant attached, plus warrants to purchase an additional 750,000 shares, at a price of $6.25 per share. Myomo hopes to raise $8.5 million on the initial offering, with an additional $1.4 million from an underwriter’s option.
Fastbrick Robotics (ASX:FBR), an Australian brick-laying startup, raised $26.5 million by offering 184 million shares in a private placement. This is in addition to the $2 million investment in July by Caterpillar who will be manufacturing, selling and servicing Fastbrick’s technology mounted on Caterpillar equipment.
Silicon Designs Introduces Inertial-Grade MEMS Capacitive Accelerometers with Internal Temperature Sensor and Improved Low-Noise Performance
Five Full Standard G-Ranges from ±2 g to ±50 g Now Available for Immediate Customer Shipment
November 9, 2017, Kirkland, Washington, USA – Silicon Designs, Inc. (www.SiliconDesigns.com), a 100% veteran owned, U.S. based leading designer and manufacturer of highly rugged MEMS capacitive accelerometer chips and modules, today announced the immediate availability of its Model 1525 Series, a family of commercial and inertial-grade MEMS capacitive accelerometers, offering industry-best-in-class low-noise performance.
Design of the Model 1525 Series incorporates Silicon Designs’ own high-performance MEMS variable capacitive sense element, along with a ±4.0V differential analog output stage, internal temperature sensor and integral sense amplifier — all housed within a miniature, nitrogen damped, hermetically sealed, surface mounted J-lead LCC-20 ceramic package (U.S. Export Classification ECCN 7A994). The 1525 Series features low-power (+5 VDC, 5 mA) operation, excellent in-run bias stability, and zero cross-coupling. Five unique full-scale ranges, of ±2 g, ±5 g, ±10 g, ±25 g, and ±50 g, are currently in production and available for immediate customer shipment. Each MEMS accelerometer offers reliable performance over a standard operating temperature range of -40° C to +85° C. Units are also relatively insensitive to wide temperature changes and gradients. Each device is marked with a serial number on its top and bottom surfaces for traceability. A calibration test sheet is supplied with each unit, showing measured bias, scale factor, linearity, operating current, and frequency response.
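For illustration, the specifications above imply a simple nominal scale factor: the ±4.0 V differential output spans the selected full-scale range, so a ±2 g unit delivers about 2 V/g while a ±50 g unit delivers about 0.08 V/g. The sketch below is a minimal, hypothetical conversion assuming a perfectly linear, zero-offset response; the function names are ours, and real units should use the measured bias and scale factor from the supplied calibration sheet.

```python
# Hypothetical sketch: converting the Model 1525's differential output
# voltage to acceleration in g, assuming an ideal linear response with
# zero bias. Actual devices ship with a calibration sheet listing the
# measured bias and scale factor, which should be used instead.

def sensitivity_v_per_g(full_scale_g: float) -> float:
    """Nominal sensitivity: the ±4.0 V output spans ±full_scale_g."""
    return 4.0 / full_scale_g

def volts_to_g(v_out: float, full_scale_g: float) -> float:
    """Convert a measured differential output voltage to acceleration in g."""
    return v_out / sensitivity_v_per_g(full_scale_g)

# A ±2 g unit reading +2.0 V indicates +1.0 g.
print(volts_to_g(2.0, 2.0))       # 1.0
# A ±50 g unit is far less sensitive: 0.08 V/g nominal.
print(sensitivity_v_per_g(50.0))  # 0.08
```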
Carefully regulated manufacturing processes ensure that each sensor is made to be virtually identical, allowing users to swap out parts in the same g range with few-to-no testing modifications, further saving time and resources. This provides test engineers with a quick plug-and-play solution for almost any application, with total trust in sensor accuracy when used within published specifications. As the OEM of its own MEMS capacitive accelerometer chips and modules, Silicon Designs further ensures the manufacture of consistently high-quality products, with full in-house customization capabilities to customers’ exacting standards. This flexibility ensures that Silicon Designs can expeditiously design, develop and manufacture high-quality standard and custom MEMS capacitive accelerometers, yet still keep prices highly competitive.
The Silicon Designs Model 1525 Series tactical grade MEMS inertial accelerometer family is ideal for zero-to-medium frequency instrumentation applications that require high-repeatability, low noise, and maximum stability, including tactical guidance systems, navigation and control systems (GN&C), AHRS, unmanned aerial vehicles (UAVs), unmanned ground vehicles (UGVs), remotely operated vehicles (ROVs), robotic controllers, flight control systems, and marine- and land-based navigational systems. They may also be used to support critical industrial test requirements, such as those common to agricultural, oil and gas drilling, photographic and meteorological drones, as well as seismic and inertial measurements.
Since 1983, the privately held Silicon Designs has served as leading industry experts in the design, development and manufacture of highly rugged MEMS capacitive accelerometers and chips with integrated amplification, operating from its state-of-the-art facility near Seattle, Washington, USA. From the company’s earliest days, developing classified components for the United States Navy under a Small Business Innovation and Research (SBIR) grant, to its later Tibbetts Award and induction into the Space Technology Hall of Fame, Silicon Designs applies nearly 35 years of MEMS R&D innovation and applications engineering expertise into all finished product designs. For additional information on the Model 1525 Series or other MEMS capacitive sensing technologies offered by Silicon Designs, visit www.silicondesigns.com.
-###-
About Silicon Designs, Inc. Silicon Designs was founded in 1983 with the goal of improving the accepted design standard for traditional MEMS capacitive accelerometers. At that time, industrial-grade accelerometers were bulky, fragile and costly. The engineering team at Silicon Designs listened to the needs of customers who required more compact, sensitive, rugged and reasonably priced accelerometer modules and chips that also offered higher performance. The resulting product lines were designed and built to surpass customer expectations. The company has grown steadily over the years, while its core competency remains accelerometers, with the core business philosophies of “make it better, stronger, smaller and less expensive” and “let the customer drive R&D” maintained to this day.
GoodAI and AI Roadmap Institute Tokyo, ARAYA headquarters, October 13, 2017
Authors: Marek Rosa, Olga Afanasjeva, Will Millership (GoodAI)
Workshop participants: Olga Afanasjeva (GoodAI), Shahar Avin (CSER), Vlado Bužek (Slovak Academy of Science), Stephen Cave (CFI), Arisa Ema (University of Tokyo), Ayako Fukui (Araya), Danit Gal (Peking University), Nicholas Guttenberg (Araya), Ryota Kanai (Araya), George Musser (Scientific American), Seán Ó hÉigeartaigh (CSER), Marek Rosa (GoodAI), Jaan Tallinn (CSER, FLI), Hiroshi Yamakawa (Dwango AI Laboratory)
Summary
It is important to address the potential pitfalls of a race for transformative AI, where:
Key stakeholders, including the developers, may ignore or underestimate safety procedures, or agreements, in favor of faster utilization
The fruits of the technology won’t be shared by the majority of people to benefit humanity, but only by a select few
Race dynamics may develop regardless of the motivations of the actors. For example, actors may be aiming to develop a transformative AI as fast as possible to help humanity, to achieve economic dominance, or even to reduce costs of development.
There is already an interest in mitigating potential risks. We are trying to engage more stakeholders and foster cross-disciplinary global discussion.
We held a workshop in Tokyo where we discussed many questions and came up with new ones which will help facilitate further work.
The General AI Challenge Round 2: Race Avoidance will launch on 18 January 2018, to crowdsource mitigation strategies for risks associated with the AI race.
What we can do today:
Study and better understand the dynamics of the AI race
Figure out how to incentivize actors to cooperate
Build stronger trust in the global community by fostering discussions between diverse stakeholders (including individuals, groups, private and public sector actors) and being as transparent as possible in our own roadmaps and motivations
Avoid fearmongering around both AI and AGI which could lead to overregulation
Discuss the optimal governance structure for AI development, including the advantages and limitations of various mechanisms such as regulation, self-regulation, and structured incentives
Call to action — get involved with the development of the next round of the General AI Challenge
Introduction
Research and development in fundamental and applied artificial intelligence is making encouraging progress. Within the research community, there is a growing effort to make progress towards general artificial intelligence (AGI). AI is being recognized as a strategic priority by a range of actors, including representatives of various businesses, private research groups, companies, and governments. This progress may lead to an apparent AI race, where stakeholders compete to be the first to develop and deploy a sufficiently transformative AI [1,2,3,4,5]. Such a system could be either AGI, able to perform a broad set of intellectual tasks while continually improving itself, or sufficiently powerful specialized AIs.
“Business as usual” progress in narrow AI is unlikely to confer transformative advantages. This means that although we will likely see an increase in competitive pressures, which may have negative impacts on cooperation around guiding the impacts of AI, such continued progress is unlikely to spark a “winner takes all” race. It is unclear whether AGI will be achieved in the coming decades, or whether specialised AIs would confer sufficient transformative advantages to precipitate a race of this nature. There seems to be less potential for a race among public actors trying to address current societal challenges. However, even in this domain there is a strong business interest which may in turn lead to race dynamics. Therefore, at present it is prudent not to rule out any of these future possibilities.
The issue has been raised that such a race could create incentives to neglect either safety procedures or established agreements between key players for the sake of gaining first-mover advantage and controlling the technology [1]. Unless we find strong incentives for the various parties to cooperate, at least to some degree, there is also a risk that the fruits of transformative AI won’t be shared by the majority of people to benefit humanity, but only by a select few.
We believe that at the moment people present a greater risk than AI itself, and that fearmongering around AI risks in the media can only damage constructive dialogue.
Workshop and the General AI Challenge
GoodAI and the AI Roadmap Institute organized a workshop in the Araya office in Tokyo, on October 13, 2017, to foster interdisciplinary discussion on how to avoid pitfalls of such an AI race.
Workshops like this are also being used to help prepare the AI Race Avoidance round of the General AI Challenge which will launch on 18 January 2018.
The worldwide General AI Challenge, founded by GoodAI, aims to tackle this difficult problem via citizen science, promote AI safety research beyond the boundaries of the relatively small AI safety community, and encourage an interdisciplinary approach.
Why are we doing this workshop and challenge?
With race dynamics emerging, we believe we are still at a time where key stakeholders can effectively address the potential pitfalls.
Primary objective: find a solution to problems associated with the AI race
Secondary objective: develop a better understanding of race dynamics including issues of cooperation and competition, value propagation, value alignment and incentivisation. This knowledge can be used to shape the future of people, our team (or any team), and our partners. We can also learn to better align the value systems of members of our teams and alliances
It’s possible that through this process we won’t find an optimal solution, but a set of proposals that could move us a few steps closer to our goal.
General question: How can we avoid AI research becoming a race between researchers, developers, companies, governments and other stakeholders, where:
Safety gets neglected or established agreements are defied
The fruits of the technology are not shared by the majority of people to benefit humanity, but only by a select few
At the workshop, we focused on:
Better understanding and mapping the AI race: answering questions (see below) and identifying other relevant questions
Designing the AI Race Avoidance round of the General AI Challenge (creating a timeline, discussing potential tasks and success criteria, and identifying possible areas of friction)
We are continually updating the list of AI race-related questions (see appendix), which will be addressed further in the General AI Challenge, future workshops and research.
Below are some of the main topics discussed at the workshop.
1) How can we better understand the race?
Create and understand frameworks for discussing and formalizing AI race questions
Identify the general principles behind the race. Study meta-patterns from other races in history to help identify areas that will need to be addressed
Use first-principle thinking to break down the problem into pieces and stimulate creative solutions
Define clear timelines for discussion and clarify the motivation of actors
Value propagation is key. Whoever wants to advance needs to develop robust value propagation strategies
Resource allocation is also key to maximizing the likelihood of propagating one’s values
Detailed roadmaps with clear targets and open-ended roadmaps (where progress is not measured by how close the state is to the target) are both valuable tools to understanding the race and attempting to solve issues
Can simulation games be developed to better understand the race problem? Shahar Avin is in the process of developing a “Superintelligence mod” for the video game Civilization 5, and Frank Lantz of the NYU Game Center came up with a simple game where the user is an AI developing paperclips
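A toy model can make the race's incentive structure concrete. The sketch below is our own illustrative assumption, not a model from the workshop: two actors each choose a safety level, where lower safety means faster development and a higher chance of winning, but cutting corners on both sides compounds the shared risk of disaster. The payoff and probability numbers are arbitrary.

```python
import itertools

# Illustrative two-actor race model (all parameters are assumptions).
# Each actor picks a safety level s in {0.0, 0.5, 1.0}; development
# speed is 1 - s. Winning yields PRIZE; a disaster costs both sides.
PRIZE, DISASTER = 10.0, -50.0
SAFETY_LEVELS = (0.0, 0.5, 1.0)

def payoff(s_self: float, s_other: float) -> float:
    speed_self, speed_other = 1.0 - s_self, 1.0 - s_other
    total = speed_self + speed_other
    p_win = speed_self / total if total > 0 else 0.5
    # Disaster risk compounds when both actors cut corners.
    p_disaster = (1.0 - s_self) * (1.0 - s_other) * 0.5
    return p_win * PRIZE + p_disaster * DISASTER

# Enumerate outcomes: unilaterally cutting safety raises your expected
# payoff, but mutual corner-cutting is worse for everyone -- the classic
# "race to the bottom" / prisoner's dilemma structure.
for s1, s2 in itertools.product(SAFETY_LEVELS, repeat=2):
    print(f"s1={s1:.1f} s2={s2:.1f} -> payoff1={payoff(s1, s2):+.2f}")
```

Even this crude model reproduces the dynamic the workshop worries about: from mutual full safety (payoff +5.00 each), either actor gains by defecting to zero safety (+10.00), yet mutual defection yields -20.00, far worse than cooperation.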
2) Is the AI race really a negative thing?
Competition is natural and we find it in almost all areas of life. It can encourage actors to focus, and it lifts up the best solutions
The AI race itself could be seen as a useful stimulus
It is perhaps not desirable to “avoid” the AI race but rather to manage or guide it
Is compromise and consensus good? If actors over-compromise, the end result could be too diluted to make an impact, and not exactly what anyone wanted
Unjustified negative escalation in the media around the race could lead to unnecessarily stringent regulations
As we see race dynamics emerge, the key question is if the future will be aligned with most of humanity’s values. We must acknowledge that defining universal human values is challenging, considering that multiple viewpoints exist on even fundamental values such as human rights and privacy. This is a question that should be addressed before attempting to align AI with a set of values
3) Who are the actors and what are their roles?
Who is not part of the discussion yet? Who should be?
The people who will implement AI race mitigation policies and guidelines will be the people working on them right now
Military and big companies will be involved. Not because we necessarily want them to shape the future, but they are key stakeholders
Which existing research and development centers, governments, states, intergovernmental organizations, companies and even unknown players will be the most important?
What is the role of media in the AI race, how can they help and how can they damage progress?
Future generations should also be recognized as stakeholders who will be affected by decisions made today
Regulation can be viewed as an attempt to limit future actors who are more intelligent or more powerful. Therefore, to avoid conflict, it’s important to make sure that any necessary regulations are well thought through and beneficial for all actors
4) What are the incentives to cooperate on AI?
One of the exercises at the workshop was to analyze:
What are the motivations of key stakeholders?
What are the levers they have to promote their goals?
What could be their incentives to cooperate with other actors?
One of the prerequisites for effective cooperation is a sufficient level of trust:
How do we define and measure trust?
How can we develop trust among all stakeholders — inside and outside the AI community?
Predictability is an important factor. Actors who are open about their value system, transparent in their goals and ways of achieving them, and who are consistent in their actions, have better chances of creating functional and lasting alliances.
5) How could the race unfold?
Workshop participants put forward multiple viewpoints on the nature of the AI race and a range of scenarios of how it might unfold.
As an example, below are two possible trajectories of the race to general AI:
Winner takes all: one dominant actor holds an AGI monopoly and is years ahead of everyone. This is likely to follow a path of transformative AGI (see diagram below).
Example: Similar technology advantages have played an important role in geopolitics in the past. For example, by 1900 Great Britain, with only 40 million people, managed to capitalise on the advantage of technological innovation, creating an empire covering about one quarter of the Earth’s land and population [7].
Co-evolutionary development: many actors at a similar level of R&D racing incrementally towards AGI.
Example: This direction would be similar to the first stage of space exploration, when two actors (the Soviet Union and the United States) were developing and successfully putting to use competing technologies.
Other considerations:
We could enter a race towards incrementally more capable narrow AI (not a “winner takes all” scenario: grab AI talent)
We are in multiple races to have incremental leadership on different types of narrow AI. Therefore we need to be aware of different risks accompanying different races
The dynamics will be changing as different races evolve
The diagram below explores some of the potential pathways from the perspective of how the AI itself might look. It depicts beliefs about three possible directions that the development of AI may progress in. Roadmaps of assumptions of AI development, like this one, can be used to think of what steps we can take today to achieve a beneficial future even under adversarial conditions and different beliefs.
Legend:
Transformative AGI path: any AGI that will lead to dramatic and swift paradigm shifts in society. This is likely to be a “winner takes all” scenario.
Swiss Army Knife AGI path: a powerful (can be also decentralized) system made up of individual expert components, a collection of narrow AIs. Such AGI scenario could mean more balance of power in practice (each stakeholder will be controlling their domain of expertise, or components of the “knife”). This is likely to be a co-evolutionary path.
Narrow AI path: in this path, progress does not indicate proximity to AGI and it is likely to see companies racing to create the most powerful possible narrow AIs for various tasks.
Current race assumption in 2017
Assumption: We are in a race to incrementally more capable narrow AI (not a “winner takes all” scenario: grab AI talent)
Counter-assumption: We are in a race to “incremental” AGI (not a “winner takes all” scenario)
Counter-assumption: We are in a race to recursive AGI (winner takes all)
Counter-assumption: We are in multiple races to have incremental leadership on different types of “narrow” AI
Foreseeable future assumption
Assumption: At some point (possibly 15 years) we will enter a widely-recognised race to a “winner takes all” scenario of recursive AGI
Counter-assumption: In 15 years, we continue incremental (not a “winner takes all” scenario) race on narrow AI or non-recursive AGI
Counter-assumption: In 15 years, we enter a limited “winner takes all” race to certain narrow AI or non-recursive AGI capabilities
Counter-assumption: The overwhelming “winner takes all” is avoided by the total upper limit of available resources that support intelligence
Other assumptions and counter-assumptions of race to AGI
Assumption: Developing AGI will take a large, well-funded, infrastructure-heavy project
Counter-assumption: A few key insights will be critical, and they could come from small groups. Google Search, for example, was not invented inside a well-known established company; it started from scratch and revolutionized the landscape
Counter-assumption: Small groups can also layer key insights onto existing work of bigger groups
Assumption: AI/AGI development will require large datasets and be subject to other limiting factors
Counter-assumption: AGI will be able to learn from real and virtual environments and a small number of examples the same way humans can
Assumption: AGI and its creators will be easily controlled by limitations on money, political leverage and other factors
Counter-assumption: AGI can be used to generate money on the stock market
Assumption: Recursive improvement will proceed linearly or with diminishing returns (e.g. learning to learn by gradient descent by gradient descent)
Counter-assumption: At a certain point in generality and cognitive capability, recursive self-improvement may begin to improve more quickly than linearly, precipitating an “intelligence explosion”
Assumption: Researcher talent will be the key limiting factor in AGI development
Counter-assumption: Government involvement, funding, infrastructure, computational resources and leverage are all also potential limiting factors
Assumption: AGI will be a singular broad-intelligence agent
Counter-assumption: AGI will be a set of modular components (each limited/narrow) but capable of generality in combination
Counter-assumption: AGI will be an even wider set of technological capabilities than the above
6) Why search for AI race solution publicly?
Transparency allows everyone to learn about the topic; nothing is hidden. This leads to more trust
Inclusion — all people from across different disciplines are encouraged to get involved because it’s relevant to every person alive
If the race is taking place, we won’t achieve anything by not discussing it, especially if the aim is to ensure a beneficial future for everyone
Fear of an immediate threat is a big motivator to get people to act. However, behavioral psychology tells us that in the long term a more positive approach may work best to motivate stakeholders. Positive public discussion can also help avoid fearmongering in the media.
7) What future do we want?
Consensus might be hard to find and also might not be practical or desirable
AI race mitigation is essentially insurance: a way to avoid unhappy futures (which may be easier than maximizing all happy futures)
Even those who think they will be a winner may end up second, and thus it’s beneficial for them to consider the race dynamics
In the future it is desirable to avoid the “winner takes all” scenario and make it possible for more than one actor to survive and utilize AI (or in other words, it needs to be okay to come second in the race or not to win at all)
One way to describe a desired future is one where the happiness of each generation is greater than the happiness of the previous generation
We are aiming to create a better future and make sure AI is used to improve the lives of as many people as possible [8]. However, it is difficult to envisage exactly what this future will look like.
One way of envisioning this could be to use a “veil of ignorance” thought experiment. If all the stakeholders involved in developing transformative AI assume they will not be the first to create it, or that they would not be involved at all, they are likely to create rules and regulations which are beneficial to humanity as a whole, rather than be blinded by their own self interest.
AI Race Avoidance challenge
In the workshop we discussed the next steps for Round 2 of the General AI Challenge.
About the AI Race Avoidance round
Although this post has used the title AI Race Avoidance, it is likely to change. As discussed above, we are not proposing to avoid the race but rather to guide, manage or mitigate the pitfalls. We will be working on a better title with our partners before the release.
The round has been postponed until 18 January 2018. The extra time allows more partners, and the public, to get involved in the design of the round to make it as comprehensive as possible.
The aim of the round is to raise awareness, discuss the topic, get as diverse an idea pool as possible and hopefully to find a solution or a set of solutions.
Submissions
The round is expected to run for several months, and can be repeated
Desired outcome: next-steps or essays, proposed solutions or frameworks for analyzing AI race questions
Submissions could be very open-ended
Submissions can include meta-solutions, ideas for future rounds, frameworks, convergent or open-ended roadmaps with various level of detail
Submissions must include a two-page summary and, if needed, a longer submission of unlimited length
No limit on number of submissions per participant
Judges and evaluation
We are actively trying to ensure diversity on our judging panel. We believe it is important to have people from different cultures, backgrounds, genders and industries representing a diverse range of ideas and values
The panel will judge the submissions on how well they maximize the chances of a positive future for humanity
Specifications of this round are work in progress
Next steps
Prepare for the launch of AI Race Avoidance round of the General AI Challenge in cooperation with our partners on 18 January 2018
Continue organizing workshops on AI race topics with participation of various international stakeholders
Promote cooperation: focus on establishing and strengthening trust among stakeholders across the globe. Transparency in goals facilitates trust. Just as we would trust an AI system whose decision making is transparent and predictable, the same applies to humans
Call to action
At GoodAI we are open to new ideas about how the AI Race Avoidance round of the General AI Challenge should look. We would love to hear from you if you have any suggestions on how the round should be structured, or if you think we have missed any important questions on our list below.
In the meantime we would be grateful if you could share the news about this upcoming round of the General AI Challenge with anyone you think might be interested.
Appendix
More questions about the AI race
Below is a list of some more of the key questions we will expect to see tackled in Round 2: AI Race Avoidance of the General AI Challenge. We have split them into three categories: Incentive to cooperate, What to do today, and Safety and security.
Incentive to cooperate:
How to incentivise the AI race winner to obey any related previous agreements and/or share the benefits of transformative AI with others?
What is the incentive to enter and stay in an alliance?
We understand that cooperation is important in moving forward safely. However, what if other actors do not understand its importance, or refuse to cooperate? How can we guarantee a safe future if there are unknown non-cooperators?
Looking at the problems across different scales, the pain points are similar even at the level of internal team dynamics. We need to invent robust mechanisms for cooperation between: individual team members, teams, companies, corporations and governments. How do we do this?
When considering various incentives for safety-focused development, we need to find a robust incentive (or a combination of such) that would push even unknown actors towards beneficial AGI, or at least an AGI that can be controlled. How?
What to do today:
How to reduce the danger of regulation over-shooting and unreasonable political control?
What role might states have in the future economy and which strategies are they assuming/can assume today, in terms of their involvement in AI or AGI development?
With regards to the AI weapons race, is a ban on autonomous weapons a good idea? What if other parties don’t follow the ban?
If regulation overshoots by creating unacceptable conditions for regulated actors, the actors may decide to ignore the regulation and bear the risk of potential penalties. For example, total prohibition of alcohol or gambling may lead to displacement of the activities to illegal areas, while well designed regulation can actually help reduce the most negative impacts such as developing addiction.
AI safety research needs to be promoted beyond the boundaries of the small AI safety community and tackled interdisciplinarily. There needs to be active cooperation between safety experts, industry leaders and states to avoid negative scenarios. How?
Safety and security:
What level of transparency is optimal and how do we demonstrate transparency?
Impact of openness: how open shall we be in publishing “solutions” to the AI race?
How do we stop the first developers of AGI becoming a target?
How can we safeguard against malignant use of AI or AGI?
Related questions
What is the profile of a developer who can solve general AI?
Who is a bigger danger: people or AI?
How would the AI race winner use the newly gained power to dominate existing structures? Will they have a reason to interact with them at all?
Universal basic income?
Is there something beyond intelligence? Intelligence 2.0
End-game: convergence or open-ended?
What would an AGI creator desire, given the possibility of building an AGI within one month/year?
Are there any goods or services that an AGI creator would need immediately after building an AGI system?
What might be the goals of AGI creators?
What are the possibilities of those that develop AGI first without the world knowing?
What are the possibilities of those that develop AGI first while engaged in sharing their research/results?
What would make an AGI creator share their results, despite having the capability of mass destruction (e.g. Internet paralysis)? (The developer’s intentions might not be evil, but their defense against “nationalization” might logically be a show of force)
Are we capable of creating such a model of cooperation in which the creator of an AGI would reap the most benefits, while at the same time be protected from others? Does a scenario exist in which a software developer monetarily benefits from free distribution of their software?
How to prevent usurpation of AGI by governments and armies? (i.e. an attempt at exclusive ownership)
The group will bring together members of the security industry, end users, technology experts and other interested parties to promote best practices regarding the use of robots in security
In the lead up to the finals of the Robot Launch 2017 competition on December 14, we’re having one round of public voting for your favorite startup from the Top 25. While in previous years we’ve had public voting for all the startups, running alongside the investor judging, this year it’s an opt-in, because many of the startups seeking investment are not yet ready to publicize. Each year the startups get better and better, so we can’t wait to see who you think is the best! Make sure you vote for your favorite – below – by 6pm PST, 10 December and spread the word through social media using #robotlaunch2017.
BotsAndUs believe in humans and robots collaborating towards a better life. Our aim is to create physical and emotional comfort with robots to support wide adoption.
In May ‘17 we launched Bo, a social robot for events, hospitality and retail. Bo approaches you in shops, hotels or hospitals, finds out what you need, takes you to it and gives you tips on the latest bargains.
In a short time the business has grown considerably: global brands as customers (British Telecom, Etisalat, Dixons), a Government award for our Human-Robot-Interaction tech, members of Nvidia’s Inception program and intuAccelerate (bringing Bo to UK’s top 10 malls), >15k Bo interactions.
C2RO (Collaborative Cloud Robotics) has developed a cloud-based software platform that uses real-time data processing technologies to provide AI-enabled solutions for robots. It dramatically augments the perceptive, cognitive and collaborative abilities of robots with a software-only solution that is portable to any cloud environment. C2RO is releasing its beta offering in November 2017, has over 40 organizations signed up for early access, and is currently working with 4 lead customers on HW integrations and joint marketing.
Kinema Systems has developed Kinema Pick, the world’s first deep-learning based 3D Vision system for robotic picking tasks in logistics and manufacturing. Kinema Pick is used for picking boxes off pallets onto conveyors with little a-priori knowledge of the types of boxes and their arrangement on the pallet. Kinema Pick requires minimal training for new boxes. Kinema Pick uses 3D workcell information and motion planning to be self-driving, requiring no programming for new workcells. The founders and employees of Kinema Systems include veterans of Willow Garage, SRI, Apple and KTH who created MoveIt!, ROS-Control, SimTrack and other open-source packages used by thousands of companies, researchers and start-ups around the world.
The future is here. Mothership’s solar powered airship will enable robotic aerial persistence by serving as a charging/docking station and communications hub for drones. This enables not only a globally connected logistical network with 1 hour delivery on any product or service but also flying charging stations for flying cars. Imagine a Tesla supercharger network in the sky.
Our first stepping stone to this future is a solar powered airship for long range aerial data collection to tackle the troublesome linear infrastructure inspection market.
A vote for mothership is a vote for the Jetsons future we were promised.
Northstar Robotics is an agricultural technology company that was founded by an experienced farmer and robotics engineer.
Our vision is to create the fully autonomous farm which will address the labour shortage problem and lower farm input costs. We will make this vision a reality by first providing an open hardware and software platform to allow current farm equipment to become autonomous. In parallel, we are going to build super awesome robots that will transform farming and set the standard for what modern agricultural equipment should be.
BLKTATU is an autonomous drone delivery platform that uses computer vision to enable deliveries to hard-to-reach places like high-rise buildings and apartments. We deliver to where you are, autonomously.
Tennibot is the world’s first autonomous ball collector. It perfectly integrates computer vision and robotics to offer tennis players and coaches an innovative solution to a tedious task: picking up balls during practice. The Tennibot saves valuable time that is currently wasted bending over for balls. It allows the user to focus on hitting and let the robot take care of the hard work. Tennibot stays out of the way of players and works silently in an area specified by the user. It also comes with a companion app that gives the user full control of their personal ball boy.
UniExo aims to help people with injuries and movement problems to restore the motor functions of their bodies with modular robotic exoskeleton devices, without additional help of doctors.
Thanks to our device and its advantages, we can help these users in rehabilitation. The product provides free movement for people with disabilities in a form that is comfortable and safe for them, without outside help, as well as for people in the post-operative or post-traumatic period undergoing rehabilitation.
We can give people a second chance at a normal life, and motivate them to do things for our world that can help other people.
Woobo unfolds a world of imagination, fun, and knowledge to children, bringing the magic of a robot companion into children’s life. Relying on cutting-edge robotics and AI technologies, our team is aiming to realize the dream of millions of children – bringing them a fluffy and soft buddy that can talk to them, amuse them, inspire them, and learn along with them. For parents, Woobo is an intelligent assistant with customized content that can help entertain, educate, and engage children, as well as further strengthen the parent-child bond.
That’s right! You better not run, you better not hide, you better watch out for brand new robot holiday videos on Robohub! Drop your submissions down our chimney at editors@robohub.org and share the spirit of the season, like these vids-of-Christmas-past. . .
Enabling robots to act autonomously in the real world is difficult. Really, really difficult. Even with expensive robots and teams of world-class researchers, robots still have difficulty autonomously navigating and interacting in complex, unstructured environments.
Fig 1. A learned neural network dynamics model enables a hexapod robot to learn to run and follow desired trajectories, using just 17 minutes of real-world experience.
Why are autonomous robots not out in the world among us? Engineering systems that can cope with all the complexities of our world is hard. From nonlinear dynamics and partial observability to unpredictable terrain and sensor malfunctions, robots are particularly susceptible to Murphy’s law: everything that can go wrong, will go wrong. Instead of fighting Murphy’s law by coding each possible scenario that our robots may encounter, we could instead choose to embrace this possibility for failure, and enable our robots to learn from it. Learning control strategies from experience is advantageous because, unlike hand-engineered controllers, learned controllers can adapt and improve with more data. Therefore, when presented with a scenario in which everything does go wrong, although the robot will still fail, the learned controller will hopefully correct its mistake the next time it is presented with a similar scenario. In order to deal with the complexities of tasks in the real world, current learning-based methods often use deep neural networks, which are powerful but not data-efficient: these trial-and-error based learners will most often still fail a second time, and a third time, and often thousands to millions of times. The sample inefficiency of modern deep reinforcement learning methods is one of the main bottlenecks to leveraging learning-based methods in the real world.
We have been investigating sample-efficient learning-based approaches with neural networks for robot control. For complex and contact-rich simulated robots, as well as real-world robots (Fig. 1), our approach is able to learn locomotion skills of trajectory-following using only minutes of data collected from the robot randomly acting in the environment. In this blog post, we’ll provide an overview of our approach and results. More details can be found in our research papers listed at the bottom of this post, including this paper with code here.
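The recipe described above, learning a dynamics model from a few minutes of random interaction and then planning through it, can be sketched in a few dozen lines. This is not the authors' code: the toy point-mass environment, the linear model standing in for their neural network, and the goal-distance cost are all illustrative assumptions chosen to keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "robot": a 2-D point whose true dynamics are unknown to the learner.
def true_dynamics(state, action):
    return state + 0.1 * action

# 1) Collect data by acting randomly (the "minutes of random interaction").
states = rng.uniform(-1, 1, size=(1000, 2))
actions = rng.uniform(-1, 1, size=(1000, 2))
next_states = np.array([true_dynamics(s, a) for s, a in zip(states, actions)])

# 2) Fit a dynamics model s' = s + f(s, a). The paper uses a neural network;
#    plain least squares stands in here to keep the sketch dependency-free.
X = np.hstack([states, actions])
Y = next_states - states                    # predict the state *change*
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def predict(state, action):
    return state + np.hstack([state, action]) @ W

# 3) Random-shooting MPC: sample candidate action sequences, roll each out
#    through the learned model, execute the first action of the cheapest one.
def mpc_action(state, goal, horizon=5, n_candidates=200):
    seqs = rng.uniform(-1, 1, size=(n_candidates, horizon, 2))
    costs = np.zeros(n_candidates)
    for i, seq in enumerate(seqs):
        s = state
        for a in seq:
            s = predict(s, a)
            costs[i] += np.linalg.norm(s - goal)  # accumulate goal distance
    return seqs[np.argmin(costs)][0]

# Closed loop: re-plan at every step while acting in the *true* environment.
state, goal = np.zeros(2), np.array([0.5, -0.5])
for _ in range(50):
    state = true_dynamics(state, mpc_action(state, goal))
print(np.linalg.norm(state - goal))  # final distance to goal (should be small)
```

The design point worth noticing is that the planner never needs a perfect model: it is re-run at every step, so modelling errors are corrected by fresh observations, which is part of why so little data suffices.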
In the second DARPA Grand Challenge, CMU’s Highlander was a favourite and was doing very well. Mid-race it began losing engine power, and it stalled long enough that Stanford’s Stanley beat it by 11 minutes.
It was recently discovered that a small computerized fuel injector controller in the Hummer (one of only two) may have been damaged in a roll-over Highlander had; if you pressed on it, the engine would reduce power or fail.
People have wondered how the robocar world might be different had that flaw not been there. Stanford’s victory was a great boost for the team, and Sebastian Thrun was hired to start Google’s car team. But Chris Urmson, lead on Highlander, was also hired to lead engineering, and he ended up staying on the project much longer than Sebastian, who was seduced by the idea of doing Udacity. Google was much more likely to have closer ties to Stanford people anyway, being where it is.
CMU’s fortunes might have ended up better, but they managed to be the main source of Uber’s first team.
There are many stories of small things making a big difference. Also well known is how Anthony Levandowski, who entered a motorcycle in the race, forgot to turn on a stabilizer. The motorcycle fell over 2 seconds after he released it, dashing all of his team’s work. Anthony did OK, of course (as another leader on the Google team, and then at Uber), but has recently had some “trouble”.
Another famous incident came when Volvo was doing a demo for press of their collision avoidance system. You could not pick a worse time for a failure, and of course there is video of it.
They had tested the demo extensively the night before. In fact they tested it too much, and left a battery connected during the night, so that it was drained by the morning when they showed off to the press.
These stories remind people of all the ways things go wrong. More to the point, they remind us that we must design expecting things to go wrong, and have systems that are able to handle that. These early demos and prototypes didn’t have that, but cars that go on the road do and will.
Making systems resilient is the only answer when they get as complex as they are. Early car computers were pretty simple, but a self-driving system is so complex that it is never going to be formally verified or perfect. Instead, it must be expected that every part will fail, and the failure of every part — or even every combination of parts — should be tested in both simulation, and where possible in reality. What is tested is how the rest of the system handles the failure, and if it doesn’t handle it, that has to be fixed.
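A toy harness can illustrate what "test the failure of every part, and every combination of parts" looks like in practice. Everything here is hypothetical: the component names, the fallback labels, and the `simulate_drive` stub, which stands in for a full simulation run of the driving stack.

```python
import itertools

# Hypothetical component list for a simplified driving stack.
COMPONENTS = ["lidar", "radar", "camera", "gps"]

def simulate_drive(failed):
    """Stub standing in for a full simulation run with the given components
    failed; returns the fallback behaviour the rest of the system chose."""
    failed = set(failed)
    if {"lidar", "camera"} <= failed:
        return "immediate_stop"   # severely degraded: stop where safe
    if failed:
        return "pull_over"        # degraded: get off the road
    return "continue"             # nominal operation

# Inject every single failure and every pair of failures, and verify the
# system always answers with an acceptable fallback rather than no plan.
ACCEPTABLE = {"continue", "pull_over", "immediate_stop"}
for n in (1, 2):
    for combo in itertools.combinations(COMPONENTS, n):
        behaviour = simulate_drive(combo)
        assert behaviour in ACCEPTABLE, f"unhandled failure: {combo}"
print("all single and double failures handled")
```

In a real program the inner call would be a long simulated drive, and "acceptable" would be judged on the resulting trajectory, but the shape of the test, enumerate failures, assert a sane response, is the same.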
It does not need to handle it perfectly, though. For example, in many cases the answer to failure will be, “We’re at a reduced safety level. Let’s get off the road, and summon another car to help the passengers continue on their way.”
It might even be a severely reduced safety level. Possibly even, as hard as this number may be to accept, 100 times less safe! That’s because the car will never drive very far in that degraded condition. Consider a car that has one incident every million miles. In degraded condition, it might have an incident every 10,000 miles. You clearly won’t drive home in that condition, but the 1/4 mile of driving at degraded level is as risky as 25 miles of ordinary driving at full operational level, which is a risk taken every day. As long as vehicles do not drive more than a short distance at this degraded level, the overall safety record should still be satisfactory.
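That arithmetic can be checked directly, using the article's hypothetical figures:

```python
# One incident per million miles normally; 100 times worse when degraded.
normal_miles_per_incident = 1_000_000
degraded_miles_per_incident = 10_000

# A quarter mile of degraded driving to get off the road:
degraded_trip_miles = 0.25
expected_incidents = degraded_trip_miles / degraded_miles_per_incident

# Mileage at the normal safety level carrying the same expected risk:
equivalent_normal_miles = expected_incidents * normal_miles_per_incident
print(equivalent_normal_miles)  # 25.0 -- the 25 miles of ordinary driving
```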
Of course, if the safety level degrades to a level that could be called “dangerous” rather than “less safe” that’s another story. That must never be allowed.
An example of this would be failure of the main sensors, such as a LIDAR. Without a LIDAR, a car would rely on cameras and radar. Companies like Tesla think they can make a car fully safe with just those two, and perhaps they will some day. But even though those are not yet safe enough, they are safe enough for a problem like getting off the road, or even getting to the next exit on a highway.
This is important because we will never get perfection. We will only get lower and lower levels of risk, and the risk will not be constant — it will be changing with road conditions, and due to system or mechanical failures. But we can still get the safety level we want — and get the technology on the road.
The Humanoids 2017 conference earlier this month hosted an excellent photo competition. I was lucky to be one of the judges, along with Erico Guizzo from IEEE Spectrum, and Giorgio Metta as awards chair.
The decision, which was tough given the excellent submissions, was based on social media votes and scores for originality, creativity, photo structure, and tech or fun factor.
The overall winner for Best Humanoid Photo featured a pensive iCub and was entitled “To be, or not to be” by Pedro Vicente from the Vislab in Lisbon.
The winner for Best Funny Humanoid was this picture of a frustrated SABIAN entitled “If only I had a self-driving car” by Marco Moscato at the Biorobotics Institute, Scuola Superiore Sant’Anna.
You can see all the other photos below. Congratulations to all the participants, and to the Humanoids 2017 team for the organisation!
The European Robotics Week 2017 (ERW2017) Central Event organised in Brussels saw the “Robots Discovery” exhibition hosted by the European Committee of the Regions on 20-23 November, where robotics experts from 30 European and regionally-funded projects outlined the impact of their work on society.
The exhibiting projects showed robots assisting during surgery or providing support for elderly care, how robots can help students develop digital skills, monitor the environment and apply agricultural chemicals with precision and less waste or how they can save lives after disasters. The #ERW2017 hashtag has reached over 1 million impressions on social media. Here’s a look at how the “Robots Discovery” central event was portrayed.
The day ended with a reception hosted by First Vice-President of the European Committee of the Regions, Markku Markkula, and a concert by the Logos Robots Orchestra.
The day ended with a high-level dinner hosted by MEP Martina Werner at the European Parliament.
Honoured to attend the euRobotics / SPARC high-level VIP dinner in the European Parliament during the European Robotics Week #ERW2017 pic.twitter.com/yqN3VeoUa7
There’s a scare-tactic video going around on social media, and I wanted to weigh in on it—this particular video has gone from 500,000 views to almost 2 million in the past 10 days. As a matter of principle, I will not link to it. It presents a scary future in which killer robotic drones—controlled by any terrorist organization or government—run rampant.
The twin issues of killer robots and robots taking our jobs are the result of the two-edged blade of new technology, i.e., technologies that can be used for both good and evil. Should these new technologies be stopped entirely or regulated? Can they be regulated? Once you see a video like this one, you doubt whether they can ever be controlled. It is fear-driven media that doesn’t reveal it is fake until far beyond the point of irresponsibility.
Videos like this one—and there are many—are produced for multiple purposes. The issues often get lost to the drama of the message. They are the result of, or fueled by, headline-hungry news sources, social media types and commercial and political strategists. This particular shock video—fake as it is—is promoting a longer, more balanced documentary and non-profit organization on the subject of stopping autonomous killing machines. Yet there are other factual videos of the U.S. military’s Perdix drones swarming just like in the shock video. Worse still, the same technologists that teach future roboticists at MIT are also developing those Perdix drones and their swarming capabilities.
My earlier career was in political strategy and I know something about the tactics of fear and manipulation—of raising doubts for manipulative purposes, as well as the real need for technologies to equalize the playing field. Again, the two-edged sword.
At the present time, we are under very real threat militarily and from the cyber world. We must invest in countering those threats and inventing new preventative weaponry. Non-militarily, jobs ARE under threat—particularly the dull, dirty and dangerous (DDD) ones easily replaced by robots and automation. In today’s global and competitive world, DDD jobs are being replaced because they are costly and inefficient. But they are also being replaced without too much consideration for those displaced.
It’s hard for me as an investor and observer (and in the past as a hands-on participant) to reconcile what I know about the state of robotics, automation and artificial intelligence today with the future use of those very same technologies.
I see the speed of change, e.g.: for many years, Google has had thousands of coders coding their self-driving system and compiling the relevant and necessary databases and models. But along comes George Hotz and other super-coders who single-handedly write code that writes code to accomplish the same thing. Code that writes code is what Elon Musk and Stephen Hawking fear, yet it is inevitable and soon will be commonplace. Ray Kurzweil named this phenomenon and claims that the ‘singularity’ will happen by 2045, with an interim milestone in 2029 when AI will achieve human levels of intelligence. Kurzweil’s forecasts are predicated on exponential technological growth, which is clearly evident in the Google/Hotz example.
Pundits and experts suggest that when machines become smarter than human beings, they’ll take over the world. Kurzweil doesn’t think so. He envisions the same technology that will make AIs more intelligent giving humans a boost as well. It’s back to the two-edged sword of good and evil.
In my case, as a responsible writer and editor covering robotics, automation and artificial intelligence, I think it’s important to stay on topic, not fan the flames of fear, and to present the positive side of the sword.
Uber and Volvo announced an agreement where Uber will buy, in time, up to 24,000 specially built Volvo XC90s which will run Uber’s self-driving software and, presumably, offer rides to Uber customers. While the rides are some time away, people have made note of this for several reasons.
This is a pretty big order for Volvo — it’s $1B of cars at the retail price, and 1/3 of the total sales of XC90s in 2017.
This is a big fleet — there are only 12,000 yellow cabs in New York City, for example, though thanks to Uber there are now far more hailable vehicles.
In spite of Volvo’s fairly major software efforts, they will be entirely on the hardware side for this deal, and it is not exclusive for either party.
I’m not clear who originally said it — I first heard it from Marc Andreessen — but “the truest form of a partnership is called a purchase order.” In spite of the scores of partnerships and joint ventures announced to get PR in the robocar space, this is a big deal, and it’s a sign of the sort of deal car makers have been afraid of. Volvo will be primarily a contract manufacturer here; Uber will own the special sauce that makes the vehicle work, and it will own the customer. You want to be Uber in this deal. But what company can refuse a $1B order?
It also represents a big shift for Uber. Uber is often the poster child for the company that replaced assets with software. It owns no cars and yet provides the most rides. Now, Uber is going to move to the capital intensive model of owning the cars, and not having to pay drivers. There will be much debate over whether it should make such a shift. As noted, it goes against everything Uber represented in the past, but there is not really much choice.
First of all, to do things the “Uber” way would require that a large number of independent parties bought and operated robocars and then contracted out to Uber to bring them riders when not being used by their owners. Like UberX without having to drive the car. The problem is, that world is still a long way away. Car companies have put their focus on cars that can’t drive unmanned — much or at all — because that’s OK for the private car buyer. They are also far behind companies like Waymo and Uber in producing taxi capable vehicles.
If Uber waited for the pool of available private cars to get large enough, it would miss the boat. Other companies would have moved into its territory and undercut it with cheaper and cooler robotaxi service.
Secondly, you really want to be very sure about the vehicles you deploy in your first round. You want to have tested them, and you need to certify their safety because you are going to be liable in accidents no matter what you do. You can get the private owners to sign a contract taking liability but you will get sued anyway as the deep pocket if you do. This means you want to control the whole experience.
The truth is, capital is pretty cheap for companies like Uber. Even cheaper for companies like Apple and Google that have the world’s largest pools of spare capital sitting around. The main risk is that these custom robocars may not have any resale value if you bet wrong on how to build them. Fortunately, taxis wear out in about 5 years of heavy use.
Uber continues to have no fear of telling the millions of drivers who work “for” them that they will be rid of them some day. Uber driver is an unusual job, and nobody thinks of it as a career, so they can get away with this.
Trolley problem gets scarier
Academic ethicists, when defending discussions of the Trolley Problem, claim that while they understand the problems are not real, they are still valuable teaching tools for examining real questions.
The problem is the public doesn’t understand this, and is morbidly fascinated beyond all rationality with the idea of machines deciding who lives or dies. This has led Barack Obama to ask about it in his first statement on robocars, and many other declarations that we must figure out this nonsense question before we deploy robocars on the road. The now-revoked proposed NHTSA guidelines of 2016 included a theoretically voluntary requirement that vendors outline their solutions to this “problem.”
This almost got more real last week when a proposed UK bill would have demanded trolley solutions. The bill was amended at the last minute, and a bullet dodged that would have delayed the deployment of life saving technology while trying to resolve truly academic questions.
It is time for ethical ethicists to renounce the trolley problem. Even if, inside, they still think it’s got value, that value is far outweighed by the irrational fears and actions it triggers in public debate. Real people are dying every day on the roads, and we should not delay saving them to figure out how to do the “right” thing in hypothetical situations that are actually extremely rare to nonexistent. Figuring out the right thing is the wrong thing. Save solving trolley problems for version 4, and get to work on version 0.2.
There is real ethical work to be done, covering situations that happen every day: real-world safety tradeoffs and their morality, driving on roads where breaking the vehicle code is the norm, and weighing cost against safety. These are the places where ethical expertise can be valuable.
This is good to see, but I hope the two simulators also work together. One real strength of an open platform simulator is that people all around the world can contribute scenarios to it, and then every car developer can test their system in those scenarios. We want every car tested in every scenario that anybody can think of.
Waymo has developed its own simulator, and fed it with every strange thing their cars have encountered in 5M kilometers of real world driving. It’s one of the things that gives them an edge. They’ve also loaded the simulator with everything their team members can think of. This way, their driving system has the experience of seeing and trying out every odd situation that will be encountered in many lifetimes of human driving, and eventually on every type of road.
That’s great, but no one company can really build it all. This is one of the great things to crowdsource. Let all the small developers, all the academics, and even all the hobbyists build simulations of dangerous scenarios. Let people record and build scenarios for driving in every city of the world, in every situation. No one company can do that, but the crowd can. This can give us confidence that any car has, at least in simulation, encountered far more than any human driver ever could, and handled it well.
Unusual liability rule
Some auto vendors have proposed a liability rule for privately owned robocars that would protect them from some liability. The rule would declare that if you bought a robocar from them, and you didn’t maintain it according to the required maintenance schedule, then the car vendor would not be liable for any accident it had.
It’s easy to see why automakers would want this rule. They are scared of liability and anything that can reduce it is a plus for them.
At the same time, this will often not make sense. Just because somebody didn’t change the oil or rotate the tires should not remove liability for a mistake by the driving system that had no relation to those factors.
What’s particularly odd here is that robocars should always be very well maintained. That’s because they will be full of sensors to measure everything that’s going on, and they will also be able to constantly test every system that can be tested.
Consider the brakes, for example. Every time a robocar brakes, it can measure that the braking is happening correctly. It can measure the temperature of the brake discs. It can listen to the sound or detect vibrations. It can even, when unmanned, find itself on an empty street and hit the brakes hard to see what happens.
In other words, unexpected brake failure should be close to impossible (particularly since robocars are being designed with 2 or 3 redundant braking systems.)
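The brake self-checks described above can be sketched as a simple health monitor: after each braking event, compare the deceleration actually measured against what was commanded, and watch the disc temperature. This is a hypothetical illustration; the function name, thresholds, and units are all assumptions, not any vendor's real diagnostic logic.

```python
# Hypothetical sketch of per-braking-event self-diagnosis. Thresholds
# and the three-state result are illustrative assumptions only.

def brake_health(commanded_decel: float,
                 measured_decel: float,
                 disc_temp_c: float,
                 max_disc_temp_c: float = 600.0,
                 tolerance: float = 0.15) -> str:
    """Classify one braking event as 'ok', 'service_soon', or 'fail_safe'.

    commanded_decel / measured_decel are in m/s^2; a healthy system
    should deliver close to what was commanded.
    """
    if disc_temp_c > max_disc_temp_c:
        return "fail_safe"      # overheated discs: stop relying on this system
    shortfall = (commanded_decel - measured_decel) / commanded_decel
    if shortfall > 2 * tolerance:
        return "fail_safe"      # braking far weaker than commanded
    if shortfall > tolerance:
        return "service_soon"   # degrading: schedule a service trip
    return "ok"
```

A car running a check like this after every stop would notice degradation long before a human driver would, which is exactly why the "unmaintained robocar" scenario should be rare.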
More to the point, a robocar will take itself in for service. When your car is not being used, it will drive itself over for an oil change or any other maintenance it needs. You would have to deliberately stop it to prevent it from being maintained on schedule. Certainly no car in a taxi fleet will go unmaintained except through deliberate negligence.