
Planning for hurricanes and other disasters with robocars


How will robocars fare in a disaster like Hurricane Harvey in Houston or Hurricane Irma, the tsunamis in Japan or Indonesia, a big earthquake, a fire, 9/11, or a war?

These are very complex questions, and certainly most teams developing cars have not spent a lot of time on solutions to them at present. Indeed, I expect these issues will not be solved until after the first significant pilot projects are deployed, because as long as robocars are a small fraction of the car population, they will not have that much effect on how things go. Some people who have given up car ownership for robocars — not that many in the early days — will possibly find themselves hunting for transportation the way other people who don’t own cars do today.

It’s a different story when, perhaps a decade from now, we get significant numbers of people who don’t own cars and rely on robocar transportation. That means people who don’t have any cars, not the larger number of people who have dropped from 2 cars to 1 thanks to robocar services.

I have addressed a few of these questions before regarding tsunamis and earthquakes.

A few key questions should be addressed:

  1. How will the car fleets deal with massively increased demand during evacuations and flight during an emergency?
  2. How will the cars deal with shutdown and overload of the mobile data networks, if it happens?
  3. How will cars deal with things like floods, storms, earthquakes and more which block roads or make travel unsafe on certain roads?


Most of these issues revolve around fleets. Privately owned robocars will tend to have steering wheels and be usable as regular cars, and so they only improve the situation. If they encounter unsafe roads, they will ask their passengers for guidance, or for full manual driving. (However, in a few decades, their passengers may no longer be very capable drivers, but the car will handle the hard parts and leave them just to provide video-game-style directions.)

Increased demand

An immediate positive is that private robocars, once they have taken their owners to safety, could drive back into the evacuation zone as temporary fleet cars and fetch other people, starting with those selected by the car’s owner but also including members of the public needing assistance. This should dramatically increase the ability of the car fleet to get people moved.

Nonetheless, it is often noted that in a robocar taxi world, there don’t need to be nearly as many cars in a city as we have today. With ideal efficiency, there would be exactly enough seats to handle the annual peak, but not many more. We might drop to just 1/4 of the cars, and many of them might be only one- or two-seaters. There will be far fewer SUVs, pickup trucks, minivans and other large vehicles, because we don’t really need nearly as many as we have today.

To counter this, carpooling may become mandatory. This will be fought, because it means you don’t get to take all the physical belongings you want to bring in an event like a flood. Worse, we might see conflict between people wanting to bring pets (in carriers), which could take seats that might be used by people. In a very urgent situation, we could see an order coming down requiring pets to be left behind lest people be left behind. Once it’s clear all the people will make it out, people or rescue workers could go back for the pets, but that’s not a very tenable plan.

One solution, if there is enough warning of the disaster (as there is for storms but not for other disasters), is for cars from outside the region to be pressed into service. There may be millions of cars within a 1-2 hour drive of a disaster zone, allowing a major increase in capacity within a short time. This would include private cars as well as taxi fleets from other cities. Those cities would face a reduction in service and more carpooling. As I write this, Irma is heading to Florida, but it is uncertain where it will strike. Fleets of cars could, in such a situation, be on their way from other states, distributing themselves along the probable path and improving their positioning as the path becomes better known. They could be ready to do a very quick mass evacuation once the forecast is certain, and return home if nothing happens. For efficiency, they could also drive themselves to be placed on car carriers and trains.

Disaster management agencies will need to build tools that calculate how many people need to be moved and what capacity exists to move them. This will let them calculate how much excess capacity there is to move pets and household possessions.
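To make that concrete, here is a toy sketch of such a capacity calculation (the fleet mix and population figures are invented for illustration, not from any real agency tool):

```python
# Toy evacuation capacity model. All names and numbers are hypothetical.

def evacuation_surplus(population, fleet):
    """Seats left over per trip wave after every person has one;
    the surplus is what's available for pets and possessions."""
    total_seats = sum(seats * count for seats, count in fleet.items())
    return total_seats - population

# fleet: seats per vehicle -> number of vehicles available per wave
fleet = {2: 40_000, 4: 25_000, 40: 300}  # small robocars, sedans, robotic buses
print(evacuation_surplus(population=150_000, fleet=fleet))
# 42000 spare seats; a negative number would signal mandatory carpooling
```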

If there are robotic transit vehicles, they could help a lot. Cars might ferry people from homes to stations where robotic buses (including those from other cities, along with human-driven buses) could carry lots of people. Then they could return, with no risk to a driver.

The last hopeful item is the ability to do better traffic management. That’s an issue in any disaster, as people will resist following rules. If people get used to the idea of lane direction reassignment, it can do a lot here. Major routes could be changed to have only one lane going toward the disaster area; that lane would be for robocars and emergency vehicles only. The next lane would be going out of the disaster area, and it would be strictly robocar only. The robocars could safely drive in opposite directions at high speed without a barrier, and they could provide a buffer for the human-driven cars in all the other lanes. One simple solution might be to have the inbound lanes converted to robocar only, with allocation of lanes based on traffic demand; the outbound lanes would remain outbound and have a mix of cars. The buses would use the robocar lanes for a congestion-free quick trip out and in.
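As a sketch of what demand-based lane allocation might look like (lane counts and demand numbers are invented for illustration):

```python
# Hypothetical demand-based lane reallocation for an evacuation route.

def allocate_lanes(total_lanes, outbound_demand, inbound_demand, reserved_inbound=1):
    """Split lanes in proportion to directional demand, always keeping at
    least `reserved_inbound` lanes for inbound robocars and emergency vehicles."""
    available = total_lanes - reserved_inbound
    share = outbound_demand / (outbound_demand + inbound_demand)
    outbound = max(1, min(available, round(available * share)))
    return {"outbound": outbound, "inbound": total_lanes - outbound}

print(allocate_lanes(total_lanes=8, outbound_demand=9000, inbound_demand=500))
# {'outbound': 7, 'inbound': 1}
```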

Saving cars

It is estimated that as many as a million cars were destroyed by flooding during Hurricane Harvey. Fortunately, robocars could be told to wake up and leave, even if not pressed into service for evacuation. With good knowledge of the topography of the land, they could also calculate the depth of water from the shape of a flood, and never drive where they could get stuck. Few cars would be destroyed by the flood.
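The calculation itself is simple if the car has a terrain elevation map; the sketch below shows the idea (the fordable-depth limit is an invented figure):

```python
# Sketch: estimating standing-water depth from terrain data. The flood's
# surface elevation is observable where the water's edge meets known ground.

MAX_FORDABLE_DEPTH_M = 0.15  # hypothetical safety limit for a passenger car

def water_depth(ground_elevation_m, flood_surface_elevation_m):
    return max(0.0, flood_surface_elevation_m - ground_elevation_m)

def safe_to_cross(ground_elevation_m, flood_surface_elevation_m):
    depth = water_depth(ground_elevation_m, flood_surface_elevation_m)
    return depth <= MAX_FORDABLE_DEPTH_M

print(safe_to_cross(ground_elevation_m=11.9, flood_surface_elevation_m=12.4))
# False: the water would be about half a meter deep there
```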

Loss of data

We’ve seen data networks fail. Cars need the data networks to receive orders to travel to destinations and pick people up. They also want updates on road conditions, closures, problems and reallocations.

This is one of the few places where DSRC could help, as it does not depend on the mobile data networks. But this rare purpose is not enough to justify it, and mesh networking is not currently in its design. It is probably more effective to build tools to keep the mobile data networks up, such as a network of emergency cell towers mounted on robotic trucks (and boats and planes?) that could travel quickly to provide limited service, for use by vehicles and for emergency communications only. If people are kept to text messages, the networks have plenty of capacity.

Existing cell towers could also be hardened, to have at least enough backup power for an evacuation, if not for a long disaster.

Roads changed by disasters

You can probably imagine 1,000 strange things that could happen during a disaster. Flooded streets. Trees and powerlines down. Buildings collapsed. Cracks in the road. Washed out bridges. Approaching tsunamis. High winds. Crashed cars.

The good thing is, if you can imagine it, so can the teams building test systems for robocars. They are building simulators to play out every strange situation they can think of, or that they’ve ever encountered in many human lifetimes of real driving on the road. Every car then gets tested to see what it will do in those strange situations, and 1,000 variations of each situation.

Cars could know the lay of the land, so that they could predict how deep water is or where flooding will go. They could know where the high ground is and how to get there without going over the low ground. If the data networks are up, they could get information in real time on road problems and disaster situations. One car might run into trouble, but after that, no other car will go that way if it shouldn’t. This even applies to traffic, something we already do with tools like Waze.
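In routing terms, that fleet learning amounts to pruning a road graph with shared hazard reports before planning a route; here is a minimal sketch over an invented toy network:

```python
# Sketch: fleet-shared hazard reports pruning a road graph before routing.
# The road network and reports are toy assumptions, not real map data.
import heapq

roads = {  # node -> [(neighbor, minutes)]
    "home": [("low_rd", 4), ("ridge_rd", 7)],
    "low_rd": [("shelter", 5)],
    "ridge_rd": [("shelter", 6)],
}
hazard_reports = {("home", "low_rd")}  # one car got stuck here; others now avoid it

def shortest_path(start, goal):
    """Dijkstra over the road graph, skipping edges any car has reported unsafe."""
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, minutes in roads.get(node, []):
            if (node, neighbor) not in hazard_reports:
                heapq.heappush(queue, (cost + minutes, neighbor, path + [neighbor]))
    return None

print(shortest_path("home", "shelter"))  # (13, ['home', 'ridge_rd', 'shelter'])
```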

War

War is one of the most difficult challenges. Roads will be blocked or changed. Certain places will be extremely dangerous and must not be visited. Checkpoints will be created that you must stop for. Communications networks will be compromised. Parties may be attempting to break into your car to take it over and turn it into a weapon against you or others. Insurgents may be modifying robocars and even ordinary drive-by-wire cars to turn them into bomb delivery systems. Cars or streets may come under active attack from snipers, artillery, grenade throwers and more. In the most extreme case, a nuclear or chemical weapon might be used.

The military wants autonomous vehicles. It wants them to move cargo in dangerous but not active war zones, and it wants them for battle. It will be dealing with all these problems, but there is no clear path from its plans to civilian vehicles. Most civilian developers will not consider war situations very heavily until they start wanting to sell cars for use in conflict zones. At first, the primary solution will be to have a steering wheel to allow manual control. The second approach will be what I call “video game mode,” where you can drive the car with a video game controller. It will take charge of not hitting things; you will just tell it where to go — what turns to make, what side of an obstacle to drive on, and scariest of all, when to override its own sensors if they believe it can’t go forward because of an obstacle.

In a conflict zone, communications will become very suspect and unreliable. No operations can depend on communications, and all communications should be seen as possible vectors for attack. At the same time you need data about places to avoid — and I mean really avoid. This problem needs lots more thought, and for now, I don’t know of people thinking about robotaxi service in conflict zones.

AVOIDING THE PRECIPICE

Race Avoidance in the Development of Artificial General Intelligence

Olga Afanasjeva, Jan Feyereisl, Marek Havrda, Martin Holec, Seán Ó hÉigeartaigh, Martin Poliak

SUMMARY
◦ Promising strides are being made in research towards artificial general intelligence systems. This progress might lead to an apparent winner-takes-all race for AGI.
◦ Concerns have been raised that such a race could create incentives to skimp on safety and to defy established agreements between key players.
◦ The AI Roadmap Institute held a workshop to begin interdisciplinary discussion on how to avoid scenarios in which such a dangerous race could occur.
◦ The focus was on scoping the problem, defining relevant actors, and visualizing possible scenarios of the AI race through example roadmaps.
◦ The workshop was the first step in preparation for the AI Race Avoidance round of the General AI Challenge that aims to tackle this difficult problem via citizen science and promote AI safety research beyond the boundaries of the small AI safety community.

Scoping the problem

With the advent of artificial intelligence (AI) in most areas of our lives, the stakes are becoming higher at every level. Investments in companies developing machine intelligence applications are reaching astronomical amounts. Despite the rather narrow focus of most existing AI technologies, the extreme competition is real, and it directly impacts the distribution of researchers among research institutions and private enterprises.

With the goal of artificial general intelligence (AGI) in sight, the competition on many fronts will become acute with potentially severe consequences regarding the safety of AGI.

The first general AI system will be disruptive and transformative. First-mover advantage will be decisive in determining the winner of the race, due to the expected exponential growth in the capabilities of the system and the subsequent difficulty other parties would have in catching up. There is a chance that lengthy and tedious AI safety work ceases to be a priority when the race is on. The risk of AI-related disaster increases when developers do not devote the necessary attention and resources to the safety of such a powerful system [1].

Once this Pandora’s box is opened, it will be hard to close. We have to act before this happens and hence the question we would like to address is:

How can we avoid general AI research becoming a race between researchers, developers and companies, where AI safety gets neglected in favor of faster deployment of powerful, but unsafe general AI?

Motivation for this post

As a community of AI developers, we should strive to avoid the AI race. Some work has been done on this topic in the past [1,2,3,4,5], but the problem is largely unsolved. We need to focus the efforts of the community to tackle this issue and avoid a potentially catastrophic scenario in which developers race towards the first general AI system while sacrificing safety of humankind and their own.

This post marks “step 0” that we have taken to tackle the issue. It summarizes the outcomes of a workshop held by the AI Roadmap Institute on 29th May 2017, at GoodAI head office in Prague, with the participation of Seán Ó hÉigeartaigh (CSER), Marek Havrda, Olga Afanasjeva, Martin Poliak (GoodAI), Martin Holec (KISK MUNI) and Jan Feyereisl (GoodAI & AI Roadmap Institute). We focused on scoping the problem, defining relevant actors, and visualizing possible scenarios of the AI race.

This workshop is the first in a series held by the AI Roadmap Institute in preparation for the AI Race Avoidance round of the General AI Challenge (described at the bottom of this page and planned to launch in late 2017). Posing the AI race avoidance problem as a worldwide challenge is a way to encourage the community to focus on solving this problem, explore this issue further and ignite interest in AI safety research.

By publishing the outcomes of this and the future workshops, and launching the challenge focused on AI race avoidance, we would like to promote AI safety research beyond the boundaries of the small AI safety community.

The issue should be subject to a wider public discourse, and should benefit from cross-disciplinary work by behavioral economists, psychologists, sociologists, policy makers, game theorists, security experts, and many more. We believe that transparency is an essential part of solving many of the world’s direst problems, and the AI race is no exception. This in turn may reduce regulatory overshooting and unreasonable political control that could hinder AI research.

Proposed line of thinking about the AI race: Example Roadmaps

One approach for starting to tackle the issue of AI race avoidance, and laying down the foundations for thorough discussion, is the creation of concrete roadmaps that outline possible scenarios of the future. Scenarios can be then compared, and mitigation strategies for negative futures can be suggested.

We used two simple methodologies for creating example roadmaps:

Methodology 1: a simple linear development of affairs is depicted by various shapes and colors representing the following notions: state of affairs, key actor, action, risk factor. The notions are grouped around each state of affairs in order to illustrate principal relevant actors, actions and risk factors.

Figure 1: This example roadmap depicts the safety issue before an AGI is developed. It is meant to be read top-down: the arrow connecting states of affairs depicts time. Entities representing actors, actions and factors are placed around the time arrow, closest to the states of affairs that they influence the most.

Methodology 2: each node in a roadmap represents a state, and each link, or transition, represents a decision-driven action by one of the main actors (such as a company/AI developer, government, rogue actor, etc.)

Figure 2: The example roadmap above visualises various scenarios from the point when the very first hypothetical company (C1) develops an AGI system. The roadmap primarily focuses on the dilemmas of C1. Furthermore, it visualises possible decisions made by key identified actors in various states of affairs, in order to depict potential roads to various outcomes. Traffic-light color coding is used to visualize the potential outcomes. Our aim was not to present all the possible scenarios, but a few vivid examples out of the vast spectrum of probable futures.
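Methodology 2 maps naturally onto a small graph data structure. The sketch below shows one way to encode it; the states, actors and actions are invented placeholders, not the workshop’s actual roadmap:

```python
# Minimal sketch of a Methodology-2 roadmap: states as nodes, decision-driven
# actions as labeled edges. All names are invented placeholders.
from dataclasses import dataclass, field

@dataclass
class Transition:
    actor: str      # e.g. "C1", "government", "rogue actor"
    action: str
    target: str     # name of the resulting state

@dataclass
class State:
    name: str
    outcome: str = "neutral"          # traffic-light coding: good/neutral/bad
    transitions: list = field(default_factory=list)

roadmap = {
    "C1 develops AGI": State("C1 develops AGI", transitions=[
        Transition("C1", "publishes safety results", "broad cooperation"),
        Transition("C1", "deploys secretly", "race escalates"),
    ]),
    "broad cooperation": State("broad cooperation", outcome="good"),
    "race escalates": State("race escalates", outcome="bad"),
}

for t in roadmap["C1 develops AGI"].transitions:
    print(f"{t.actor}: {t.action} -> {t.target} ({roadmap[t.target].outcome})")
```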

During the workshop, a number of important issues were raised: for example, the need to distinguish the different time scales for which roadmaps can be created, and the different viewpoints (good/bad scenario, different actor viewpoints, etc.).

Timescale issue

Roadmapping is frequently a subjective endeavor, and hence multiple approaches to building roadmaps exist. One of the first issues encountered during the workshop was time variance. A roadmap created with near-term milestones in mind will be significantly different from a long-term roadmap; nevertheless, the two timelines are interdependent. Rather than taking an explicit view on short- versus long-term roadmaps, it might be beneficial to consider these probabilistically. For example, what roadmap can be built if there is a 25% chance of general AI being developed within the next 15 years and a 75% chance of achieving this goal in 15–400 years?
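One crude way to make such probability weighting operational, purely as an illustration (the weights are invented, not from the workshop):

```python
# Toy illustration of weighting roadmap scenarios by subjective
# arrival-time probabilities, per the 25% / 75% example above.

scenarios = [
    # (probability, label, priority a roadmap gives near-term safety work)
    (0.25, "AGI within 15 years", 0.9),
    (0.75, "AGI in 15-400 years", 0.4),
]

blended = sum(p * w for p, _, w in scenarios)  # 0.25*0.9 + 0.75*0.4
print(f"Probability-blended near-term safety priority: {blended:.2f}")
```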

Considering the AI race at different temporal scales is likely to bring out different aspects to focus on. For instance, each actor might anticipate a different speed of reaching the first general AI system. This can have a significant impact on the creation of a roadmap and needs to be incorporated in a meaningful and robust way. For example, a Boy Who Cried Wolf situation can decrease the established trust between actors and weaken ties between developers, safety researchers, and investors. This in turn could reduce belief that the first general AI system will arrive at the anticipated time. A low belief in fast AGI arrival could, for example, result in a rogue actor miscalculating the risks of unsafe AGI deployment.

Furthermore, two apparent time “chunks” were identified that pose significantly different problems: the pre-AGI era, before the first general AI is developed, and the post-AGI era, after someone is in possession of such a technology.

In the workshop, the discussion focused primarily on the pre-AGI era as the AI race avoidance should be a preventative, rather than a curative effort. The first example roadmap (figure 1) presented here covers the pre-AGI era, while the second roadmap (figure 2), created by GoodAI prior to the workshop, focuses on the time around AGI creation.

Viewpoint issue

We have identified an extensive (but not exhaustive) list of actors that might participate in the AI race, actions taken by them and by others, as well as the environment in which the race takes place, and states in between which the entire process transitions. Table 1 outlines the identified constituents. Roadmapping the same problem from various viewpoints can help reveal new scenarios and risks.

[Table 1 is available in the original document.]

Modelling and investigating the decision dilemmas of various actors repeatedly suggested that cooperation could spread the adoption of AI safety measures and lessen the severity of race dynamics.

Cooperation issue

Cooperation among the many actors, and a spirit of trust and cooperation in general, is likely to decrease the race dynamics in the overall system. Starting with low-stakes cooperation among different actors, such as talent co-development or collaboration between safety researchers and industry, should allow for incremental building of trust and better understanding of the issues faced.

Active cooperation between safety experts and AI industry leaders, including cooperation between different AI-developing companies on the questions of AI safety, is likely to result in closer ties and in positive information propagation up the chain, leading all the way to regulatory levels. A hands-on approach to safety research with working prototypes is likely to yield better results than theoretical argumentation alone.

One area that needs further investigation in this regard is forms of cooperation that might seem intuitive but might actually reduce the safety of AI development [1].

Finding incentives to avoid the AI race

It is natural that any sensible developer would want to prevent their AI system from causing harm to its creator and to humanity, whether it is a narrow AI or a general AI system. In the case of a malignant actor, there is presumably a motivation at least not to harm themselves.

When considering various incentives for safety-focused development, we need to find a robust incentive (or a combination of such) that would push even unknown actors towards beneficial A(G)I, or at least an A(G)I that can be controlled [6].

Tying timescale and cooperation issues together

In order to prevent a negative scenario from happening, it would be beneficial to tie the different time horizons (anticipated speeds of AGI’s arrival) and cooperation together. Concrete problems in AI safety (interpretability, bias avoidance, etc.) [7] are examples of practically relevant issues that need to be dealt with immediately and collectively. At the same time, the very same issues are relevant to the presumably longer horizon of AGI development. Pointing out such concerns can promote AI safety cooperation between various developers, irrespective of their predicted horizon of AGI creation.

Forms of cooperation that maximize AI safety practice

Encouraging the AI community to discuss and attempt to solve issues such as the AI race is necessary; however, it might not be sufficient. We need to find better and stronger incentives to involve actors from a wider spectrum, beyond those traditionally associated with developing AI systems. Cooperation can be fostered through many scenarios, such as:

  • AI safety research is done openly and transparently,
  • Access to safety research is free and anonymous: anyone can be assisted and can draw upon the knowledge base without the need to disclose themselves or what they are working on, and without fear of losing a competitive edge (a kind of “AI safety helpline”),
  • Alliances are inclusive towards new members,
  • New members are allowed and encouraged to enter global cooperation programs and alliances gradually, which should foster robust trust building and minimize burden on all parties involved. An example of gradual inclusion in an alliance or a cooperation program is to start cooperating on low-stake issues from economic competition point of view, as noted above.

Closing remarks — continuing the work on AI race avoidance

In this post we have outlined our first steps in tackling the AI race. We welcome you to join the discussion and help us gradually come up with ways to minimize the danger of converging to a state in which this could be an issue.

The AI Roadmap Institute will continue to work on AI race roadmapping, identifying further actors, recognizing yet unseen perspectives, time scales and horizons, and searching for risk mitigation scenarios. We will continue to organize workshops to discuss these ideas and publish roadmaps that we create. Eventually we will help build and launch the AI Race Avoidance round of the General AI Challenge. Our aim is to engage the wider research community and to provide it with a sound background to maximize the possibility of solving this difficult problem.

Stay tuned, or even better, join in now.

About the General AI Challenge and its AI Race Avoidance round
The General AI Challenge (Challenge for short) is a citizen science project organized by general artificial intelligence R&D company GoodAI. GoodAI provided a $5 million fund to be given out in prizes throughout various rounds of the multi-year Challenge. The goal of the Challenge is to incentivize talent to tackle crucial research problems in human-level AI development and to speed up the search for safe and beneficial general artificial intelligence.
The independent AI Roadmap Institute, founded by GoodAI, collaborates with a number of other organizations and researchers on various A(G)I related issues including AI safety. The Institute’s mission is to accelerate the search for safe human-level artificial intelligence by encouraging, studying, mapping and comparing roadmaps towards this goal. The AI Roadmap Institute is currently helping to define the second round of the Challenge, AI Race Avoidance, dealing with the question of AI race avoidance (set to launch in late 2017).
Participants of the second round of the Challenge will deliver analyses and/or solutions to the problem of AI race avoidance. Their submissions will be evaluated in a two-phase evaluation process: through a) expert acceptance and b) business acceptance. The winning submissions will receive monetary prizes, provided by GoodAI.
Expert acceptance
The expert jury prize will be awarded for an idea, concept, feasibility study, or preferably an actionable strategy that shows the most promise for ensuring safe development and avoiding rushed deployment of potentially unsafe A(G)I as a result of market and competition pressure.
Business acceptance
Industry leaders will be invited to evaluate the top 10 submissions from the expert jury prize, and possibly a few more submissions of their choice (these may include proposals with potential for a significant breakthrough that fall short on feasibility criteria).
The business acceptance prize is a way to contribute to establishing a balance between the research and the business communities.
The proposals will be treated under an open licence and will be made public together with the names of their authors. Even in the absence of a “perfect” solution, the goal of this round of the General AI Challenge should be fulfilled by advancing the work on this topic and promoting interest in AI safety across disciplines.

References

[1] Armstrong, S., Bostrom, N., & Shulman, C. (2016). Racing to the precipice: a model of artificial intelligence development. AI & SOCIETY, 31(2), 201–206.

[2] Baum, S. D. (2016). On the promotion of safe and socially beneficial artificial intelligence. AI and Society, (2011), 1–9.

[3] Bostrom, N. (2017). Strategic Implications of Openness in AI Development. Global Policy, 8(2), 135–148.

[4] Geist, E. M. (2016). It’s already too late to stop the AI arms race — We must manage it instead. Bulletin of the Atomic Scientists, 72(5), 318–321.

[5] Conn, A. (2017). Can AI Remain Safe as Companies Race to Develop It?

[6] Orseau, L., & Armstrong, S. (2016). Safely Interruptible Agents.

[7] Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2016). Concrete Problems in AI Safety.


AVOIDING THE PRECIPICE was originally published in the AI Roadmap Institute Blog on Medium.

Gregory Falco: Protecting urban infrastructure against cyberterrorism

“The concept of my startup is, ‘Let’s use hacker tools to defeat hackers,’” PhD student Gregory Falco says. “If you don’t know how to break it, you don’t know how to fix it.”
Photo: Ian MacLellan

by Dara Farhadi

While working for the global management consulting company Accenture, Gregory Falco discovered just how vulnerable the technologies underlying smart cities and the “internet of things” — everyday devices that are connected to the internet or a network — are to cyberterrorism attacks.

“What happened was, I was telling sheiks and government officials all around the world about how amazing the internet of things is and how it’s going to solve all their problems and solve sustainability issues and social problems,” Falco says. “And then they asked me, ‘Is it secure?’ I looked at the security guys and they said, ‘There’s no problem.’ And then I looked under the hood myself, and there was nothing going on there.”

Falco is currently transitioning into the third and final year of his PhD in the Department of Urban Studies and Planning (DUSP), and he is carrying out his research at the Computer Science and Artificial Intelligence Laboratory (CSAIL). His focus is cybersecurity for urban critical infrastructure, and the internet of things, or IoT, is at the center of his work. A washing machine, for example, that is connected to an app on its owner’s smartphone is considered part of the IoT. There are billions of IoT devices that don’t have traditional security software because they’re built with small amounts of memory and low-power processors. This makes these devices susceptible to cyberattacks and may provide a gateway for hackers to breach other devices on the same network.

Falco’s concentration is on industrial controls and embedded systems such as automatic switches found in subway systems.

“If someone decides to figure out how to access a switch by hacking another access point that is communicating with that switch, then that subway is not going to stop, and people are going to die,” Falco says. “We rely on these systems for our life functions — critical infrastructure like electric grids, water grids, or transportation systems, but also our health care systems. Insulin pumps, for example, are now connected to your smartphone.”

Citing real-world examples, Falco notes that Russian hackers were able to take down the Ukrainian capital city’s electric grid, and that Iranian hackers interfered with the computer-guided controls of a small dam in Rye Brook, New York.

Falco aims to help combat potential cyberattacks through his research. One arm of his dissertation, which he is working on with renowned negotiation expert Professor Lawrence Susskind, is aimed at conflict negotiation and looks at how best to negotiate with cyberterrorists. With CSAIL Principal Research Scientist Howard Shrobe, Falco also seeks to determine the possibility of predicting which control-system vulnerabilities could be exploited in critical urban infrastructure. The final branch of Falco’s dissertation is a collaboration with NASA’s Jet Propulsion Laboratory: he has secured a contract to develop an artificial intelligence-powered automated attack generator that can identify all the possible ways someone could hack and destroy NASA’s systems.

“What I really intend to do for my PhD is something that is actionable to the communities I’m working with,” Falco says. “I don’t want to publish something in a book that will sit on a shelf where nobody would read it.”

“Not science fiction anymore”

Falco’s battle against cyberterrorism has also led him to co-found NeuroMesh, a startup dedicated to protecting IoT devices by using the same techniques hackers use.

“The concept of my startup is, ‘Let’s use hacker tools to defeat hackers,’” Falco says. “If you don’t know how to break it, you don’t know how to fix it.”

One tool hackers use is called a botnet. Once a botnet gets on a device, it often kills off other malware there so that it can use all of the device’s processing power for itself. Botnets also play “king of the hill” on the device, refusing to let other botnets latch on.

NeuroMesh uses a botnet’s features against itself to create a good botnet. By re-engineering the botnet, programmers can use it to defeat any kind of malware that comes onto a device.

“The benefit is also that when you look at securing IoT devices with low memory and low processing power, it’s impossible to put any security on them, but these botnets have no problem getting on there because they are so small,” Falco says.

Much like a vaccine protects against disease, NeuroMesh applies a cyber vaccine to protect industrial devices from cyberattacks. And by leveraging the bitcoin blockchain to update devices, NeuroMesh further fortifies the security system, blocking other malware from attacking vital IoT devices.
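The article doesn’t detail NeuroMesh’s mechanism, but the general pattern of blockchain-anchored updates can be sketched as follows: a device accepts new firmware only if its hash matches one committed to an append-only ledger. Everything here is an invented stand-in, not NeuroMesh’s actual protocol:

```python
# Sketch of blockchain-anchored update verification. The "ledger" below is a
# stand-in for hashes committed to a real blockchain.
import hashlib

# version -> SHA-256 of the approved firmware image (as if read from a chain)
LEDGER = {"1.0.4": hashlib.sha256(b"firmware-image-bytes").hexdigest()}

def verify_update(version: str, firmware: bytes) -> bool:
    """Accept an update only if its hash matches the ledger commitment,
    so a tampered image fails no matter who delivered it."""
    expected = LEDGER.get(version)
    return expected is not None and hashlib.sha256(firmware).hexdigest() == expected

print(verify_update("1.0.4", b"firmware-image-bytes"))  # True
print(verify_update("1.0.4", b"tampered-image-bytes"))  # False
```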

Recently, Falco and his team pitched their botnet vaccine at MIT’s $100K Accelerate competition and placed second. Falco’s infant son was in the audience while Falco was presenting how NeuroMesh’s technology could secure a baby monitor, as an example, from being hacked. The startup advanced to MIT’s prestigious 100K Launch startup competition, where they finished among the top eight competitors. NeuroMesh is now further developing its technology with the help of a grant from the Department of Energy, working with Stuart Madnick, who is the John Norris Maguire Professor at MIT, and Michael Siegel, a principal research scientist at MIT’s Sloan School of Management.

“Enemies are here. They are on our turf and in our wires. It’s not science fiction anymore,” Falco says. “We’re protecting against this. That’s what NeuroMesh is meant to do.”  

The human tornado

Falco’s abundant energy has led his family to call him “the tornado.”

“One-fourth of my mind is on my startup, one-fourth on finishing my dissertation, and the other half is on my 11-month-old, because he comes with me when my wife works,” Falco says. “He comes to all our venture capital meetings and my presentations. He’s always around and he’s generally very good.”

As a high school student, Falco’s energy and excitement for engineering drove him to discover a new physics wave theory. Applying this to the tennis racket, he invented a new, control-enhanced method of stringing, with which he won various science competitions (and tennis matches). He used this knowledge to start a small business for stringing rackets. The thrill of business took him on a path to Cornell University’s School of Hotel Administration. After graduating early, Falco transitioned into the field of sustainability technology and energy systems, and returned to his engineering roots by earning his LEED AP (Leadership in Energy and Environmental Design) accreditation and a master’s degree in sustainability management from Columbia University.

His excitement followed him to Accenture, where he founded the smart cities division and eventually learned about the vulnerability of IoT devices. For the past three years, Falco has also been sharing his newfound knowledge about sustainability and computer science as an adjunct professor at Columbia University.

“My challenge is always to find these interdisciplinary holes because my background is so messed up. You can’t say, this guy is a computer scientist or he’s a business person or an environmental scientist because I’m all over the place,” he says.

That’s part of the reason why Falco enjoys taking care of his son, Milo, so much.

“He’s the most awesome thing ever. I see him learning and it’s really amazing,” Falco says. “Spending so much time with him is very fun. He does things that my wife gets frustrated at because he’s a ball of energy and all over the place — just like me.”

Isaac Asimov’s 3 laws of AI – updated

In an op-ed piece in the NY Times, and in a TED Talk late last year, Oren Etzioni, PhD, author, and CEO of the Allen Institute for Artificial Intelligence, suggested an update to Isaac Asimov’s three laws for artificial intelligence. Given the widespread media attention emanating from Elon Musk’s (and others’) warnings, these updates might be worth reviewing.

The Warnings

In an open letter to the U.N., a group of specialists from 26 nations, led by Elon Musk, called for the United Nations to ban the development and use of autonomous weapons. The signatories included Musk and DeepMind co-founder Mustafa Suleyman, as well as 100+ other leaders in robotics and artificial intelligence companies. They write that AI technology has reached a point where the deployment of such systems in the form of autonomous weapons is feasible within years, not decades, and many in the defense industry are saying that autonomous weapons will be the third revolution in warfare, after gunpowder and nuclear arms.

Another more political warning was recently broadcast on VoA: Russian President Vladimir Putin, speaking to a group of Russian students, called artificial intelligence “not only Russia’s future but the future of the whole of mankind… The one who becomes the leader in this sphere will be the ruler of the world. There are colossal opportunities and threats that are difficult to predict now.”

Asimov’s Three Rules

Isaac Asimov wrote “Runaround” in 1942, in which a government Handbook of Robotics (from 2058) included the following three rules:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Etzioni’s Updated Rules

Etzioni has updated those three rules in his NY Times op-ed piece to:

  1. An A.I. system must be subject to the full gamut of laws that apply to its human operator.
  2. An A.I. system must clearly disclose that it is not human.
  3. An A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.

Etzioni offered these updates to begin a discussion that would lead to a non-fictional Handbook of Robotics by the United Nations — and sooner than the 2058 sci-fi date. One that would regulate but not thwart the already growing global AI business.

And growing it is!

China’s Artificial Intelligence Manifesto

China has recently announced their long-term goal to become #1 in A.I. by 2030. They plan to grow their A.I. industry to over $22 billion by 2020, $59 billion by 2025 and $150 billion by 2030. They did this same type of long-term strategic planning for robotics – to make it an in-country industry and to transform the country from a low-cost labor source to a high-tech manufacturing resource… and it’s working.

With this major strategic long-term AI push, China is looking to rival U.S. market leaders such as Alphabet/Google, Apple, Amazon, IBM and Microsoft. China is keen not to be left behind in a technology that is increasingly pivotal — from online commerce to self-driving vehicles to energy to consumer products. China aims to catch up by solving issues including a lack of high-end computer chips, software that writes software, and trained personnel. Beijing will play a big role in policy support and regulation as well as providing and funding research, incentives and tax credits.

Premature or not, the time is now

Many in AI and robotics feel that the present state of development in AI, including improvements in machine and deep learning methods, is primitive and decades away from independent thinking. Siri and Alexa, as fun and capable as they are, are still programmed by humans and cannot even initiate a conversation or truly understand its content. Nevertheless, there is a reason people sense what may be possible in a future where artificial intelligence decides what ‘it’ thinks is best for us. Consequently, global regulation can’t hurt.

Micro drones swarm above Metallica

Metallica’s European WorldWired tour, which opened to an ecstatic crowd of 15,000 in Copenhagen’s sold-out Royal Arena this Saturday, features a swarm of micro drones flying above the band. Shortly after the band breaks into their hit single “Moth Into Flame”, dozens of micro drones start emerging from the stage, forming a large rotating circle above the stage. As the music builds, more and more drones emerge and join the formation, creating increasingly complex patterns, culminating in a choreography of three interlocking rings that rotate in position.

This show’s debut marks the world’s first autonomous drone swarm performance in a major touring act. Unlike previous drone shows, this performance features indoor drones, flying above performers and right next to throngs of concert viewers in a live event setting. Flying immediately next to audiences creates a more intimate effect than outdoor drone shows. The same closeness also allows the creation of moving, three-dimensional sculptures like the ones seen in the video — an effect further enhanced by Metallica’s 360-degree stage setup, with concert viewers on all sides.

Flying drones close to and around people in such a setting is challenging. Unlike outdoors, indoor drones cannot rely on GPS signals, which are severely degraded in indoor settings and do not offer the required accuracy for autonomous drone navigation on stage. The safety aspects of flying dozens of drones close to crowds in the high-pressure, live-event environment impose further challenges. Robustness to the uncertainties caused by changing show conditions in a touring setting as well as variation in the drone systems’ components and sensors, including the hundreds of motors powering the drones, is another necessary condition for this drone show system.

“It’s all about safety and reliability first,” says Raffaello D’Andrea, founder of Verity Studios, the company behind the drones used in the Metallica show (full disclosure: I’m a co-founder). D’Andrea knows what he is talking about: with his previous company, which was snatched up by e-commerce giant Amazon for an eye-watering $775 million in 2012, he and his team created fleets of autonomous warehousing robots, moving inventory through the warehouse around the clock. That company, which has since been renamed Amazon Robotics, now operates up to 10,000 robots — in a single warehouse.

How was this achieved?

In a nutshell: Verity Studios’ drone show system is an advanced show automation system that uses distributed AI, robotics, and sophisticated algorithms to achieve the level of robust performance and safety required by the live entertainment industry. With a track record of >7,000 autonomous flights on Broadway, achieved with its larger Stage Flyer drones during 398 live shows, Verity Studios is no newcomer to this industry.

Many elements are needed to create a touring drone show; the drones themselves are just one aspect. Verity’s drones are autonomous, supervised by a human operator who does not control drone motions individually. Instead, the operator only issues high-level commands such as “takeoff” or “land”, monitors the motions of multiple drones at a time, and reacts to anomalies. In other words, Verity’s advanced automation system takes over the role of the multiple human pilots that would be required with standard, remote-controlled drones. The drones are flying mobile robots that navigate autonomously, piloting themselves under human supervision. The autonomous drones’ motions and their lighting design are choreographed by Verity’s creative staff.
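In software terms, the supervision model looks something like the following sketch; the class and method names are invented for illustration, not Verity’s actual interfaces:

```python
# Sketch of the supervision model described above: one operator issues
# fleet-level commands; each drone pilots itself.

class ShowDrone:
    def __init__(self, drone_id: str):
        self.drone_id = drone_id
        self.state = "parked"

    def execute(self, command: str):
        # Onboard autonomy turns a high-level command into trajectories.
        self.state = {"takeoff": "flying", "land": "parked"}.get(command, self.state)

class OperatorConsole:
    """One human supervises many drones; no per-drone joystick control."""
    def __init__(self, drones):
        self.drones = drones

    def broadcast(self, command: str):
        for drone in self.drones:
            drone.execute(command)

    def anomalies(self):
        # The operator reacts only to drones flagged as faulty.
        return [d.drone_id for d in self.drones if d.state == "fault"]

console = OperatorConsole([ShowDrone(f"d{i}") for i in range(8)])
console.broadcast("takeoff")
print([d.state for d in console.drones])  # ['flying', ...] for all eight
```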

To navigate autonomously, drones require a reliable method for determining their position in space. As mentioned above, while drones can use GPS for their autonomous navigation in an outdoor setting, GPS is not a viable option indoors: GPS signals degrade close to large structures (e.g., tall buildings) and are usually not available, or severely degraded, in indoor environments. Since degraded GPS may result in unreliable or unsafe conditions for autonomous flight, the Verity drones use proprietary indoor localization technology.

System architecture of Verity Studios’ drone show system used in Metallica’s WorldWired tour, comprising positioning modules part of Verity’s indoor localization technology, autonomous drones, and an operator control station.

It is the combination of a reliable indoor positioning system with intelligent autonomous drones and a suitable operator interface that allows the single operator of the Metallica show to simultaneously control the coordinated movement of many drones. This pilot-less approach is not merely a matter of increasing efficiency and effectiveness (who wants to have dozens of pilots on staff?), but also a key safety requirement: pilot errors have been an important contributing factor in dozens of documented drone accidents at live events. Safety risks rapidly increase as the number of drones increases, resulting in more complex flight plans and higher risks of mid-air collisions. Autonomous control allows safer operation of multiple drones than remote control by human pilots, especially when operating in a reduced airspace envelope.

Verity’s system also had to be engineered for safety in spite of other potential failures, including wireless interference, hardware or software component failures, power outages, and malicious disruption or hacking attacks. In its 398-show run on Broadway, the biggest challenge to safety turned out to be another factor: human error. While operated by theater staff on Broadway, Verity’s system correctly identified human errors on five occasions and prevented the affected drones from taking flight (on these occasions, the show continued with six or seven instead of the show’s planned eight drones; only one show proceeded without any drones as a safety precaution, i.e., the drone show’s “uptime” was 99.7%). As my colleagues and I have outlined in a recently published overview document on best practices for drone shows, when using drones at live events, safety is a hard requirement.

Another key element of Verity’s show creation process is drone authoring tools. Planning shows like the Metallica performance requires tools for the efficient creation of trajectories for large numbers of drones. The trajectories must account for the drones’ actual flight dynamics, considering actuator limitations, as well as for aerodynamic effects such as air turbulence or lift. Drone motions generated by these tools need to be collision-free and allow for emergency maneuvers. To create compelling effects, drone authoring tools also need to allow extracting all of the dynamic performance the drones are capable of — another area in which D’Andrea’s team gained considerable experience prior to founding Verity Studios, in this case through research at the Swiss Federal Institute of Technology’s Flying Machine Arena.
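One basic check such an authoring tool must run is pairwise separation over time. The sketch below illustrates the idea on sampled trajectories; the geometry and safety margin are invented, and real tools also model dynamics and aerodynamics:

```python
# Sketch: a collision check over choreographed trajectories, sampled at
# fixed time steps. Margins and geometry are illustrative assumptions.
import math

MIN_SEPARATION_M = 1.5  # hypothetical safety margin between drones

def min_pairwise_separation(trajectories):
    """trajectories: list of same-length lists of (x, y, z) samples."""
    worst = math.inf
    steps = len(trajectories[0])
    for t in range(steps):
        for i in range(len(trajectories)):
            for j in range(i + 1, len(trajectories)):
                worst = min(worst, math.dist(trajectories[i][t], trajectories[j][t]))
    return worst

# Three drones evenly spaced on a rotating circle of radius 1 m at 3 m height
circle = [[(math.cos(a + p), math.sin(a + p), 3.0) for a in (0.0, 0.5, 1.0)]
          for p in (0.0, 2.1, 4.2)]
print(min_pairwise_separation(circle) >= MIN_SEPARATION_M)  # True
```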

Creating a compelling drone show requires more than the drone show system itself. For this tour, Verity Studios partnered with the world’s leading stage automation company, TAIT Towers, to integrate the drones into the stage floor and to tackle a series of other technical challenges related to this touring show.

While technology is the key enabler, the starting point and the key driver of Verity’s shows are non-technological. Instead, the show is driven by the show designers’ creative intent. This comprises defining the role of show drones for the performance at hand as well as determining their integration into the visual and musical motifs of the show’s creative concept. For Metallica, the drones’ flight trajectories and lighting were created by Verity’s choreography team, incorporating feedback from Metallica’s production team and the band.

Metallica’s WorldWired tour
Metallica’s WorldWired Tour is their first worldwide tour after the World Magnetic Tour six years ago. The tour’s currently published European leg runs until 11 May 2018, with all general tickets sold out.


Some images for your viewing pleasure

James Hetfield with Verity’s drones
Verity’s drones swarming below TAIT’s LED cubes
A glimpse at the 15,000-strong audience of the sold-out concert

Robotic system monitors specific neurons

MIT engineers have devised a way to automate the process of monitoring neurons in a living brain using a computer algorithm that analyzes microscope images and guides a robotic arm to the target cell. In this image, a pipette guided by a robotic arm approaches a neuron identified with a fluorescent stain.
Credit: Ho-Jun Suk

by Anne Trafton

Recording electrical signals from inside a neuron in the living brain can reveal a great deal of information about that neuron’s function and how it coordinates with other cells in the brain. However, performing this kind of recording is extremely difficult, so only a handful of neuroscience labs around the world do it.

To make this technique more widely available, MIT engineers have now devised a way to automate the process, using a computer algorithm that analyzes microscope images and guides a robotic arm to the target cell.

This technology could allow more scientists to study single neurons and learn how they interact with other cells to enable cognition, sensory perception, and other brain functions. Researchers could also use it to learn more about how neural circuits are affected by brain disorders.

“Knowing how neurons communicate is fundamental to basic and clinical neuroscience. Our hope is this technology will allow you to look at what’s happening inside a cell, in terms of neural computation, or in a disease state,” says Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT, and a member of MIT’s Media Lab and McGovern Institute for Brain Research.

Boyden is the senior author of the paper, which appears in the Aug. 30 issue of Neuron. The paper’s lead author is MIT graduate student Ho-Jun Suk.

Precision guidance

For more than 30 years, neuroscientists have been using a technique known as patch clamping to record the electrical activity of cells. This method, which involves bringing a tiny, hollow glass pipette in contact with the cell membrane of a neuron, then opening up a small pore in the membrane, usually takes a graduate student or postdoc several months to learn. Learning to perform this on neurons in the living mammalian brain is even more difficult.

There are two types of patch clamping: a “blind” (not image-guided) method, which is limited because researchers cannot see where the cells are and can only record from whatever cell the pipette encounters first, and an image-guided version that allows a specific cell to be targeted.

Five years ago, Boyden and colleagues at MIT and Georgia Tech, including co-author Craig Forest, devised a way to automate the blind version of patch clamping. They created a computer algorithm that could guide the pipette to a cell based on measurements of a property called electrical impedance — which reflects how difficult it is for electricity to flow out of the pipette. If there are no cells around, electricity flows and impedance is low. When the tip hits a cell, electricity can’t flow as well and impedance goes up.

Once the pipette detects a cell, it can stop moving instantly, preventing it from poking through the membrane. A vacuum pump then applies suction to form a seal with the cell’s membrane. Then, the electrode can break through the membrane to record the cell’s internal electrical activity.
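The core of that stopping rule can be sketched in a few lines; the thresholds, step size, and toy measurement harness below are invented stand-ins for the real rig:

```python
# Sketch of the impedance-based stopping rule described above: lower the
# pipette in small steps and halt when impedance jumps, signaling contact.

STEP_UM = 2                 # descent step, micrometers (illustrative)
JUMP_THRESHOLD = 1.15       # stop if impedance rises 15% over baseline

def descend_until_contact(measure_impedance, move_down, max_steps=500):
    baseline = measure_impedance()
    for _ in range(max_steps):
        move_down(STEP_UM)
        if measure_impedance() >= baseline * JUMP_THRESHOLD:
            return True      # contact: stop before puncturing, apply suction
    return False             # no cell found within the travel range

# Toy harness standing in for the real rig:
readings = iter([5.0, 5.0, 5.1, 5.0, 6.2])   # megaohms; jump on the last step
print(descend_until_contact(lambda: next(readings), lambda um: None))  # True
```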

The researchers achieved very high accuracy using this technique, but it still could not be used to target a specific cell. For most studies, neuroscientists have a particular cell type they would like to learn about, Boyden says.

“It might be a cell that is compromised in autism, or is altered in schizophrenia, or a cell that is active when a memory is stored. That’s the cell that you want to know about,” he says. “You don’t want to patch a thousand cells until you find the one that is interesting.”

To enable this kind of precise targeting, the researchers set out to automate image-guided patch clamping. This technique is difficult to perform manually because, although the scientist can see the target neuron and the pipette through a microscope, he or she must compensate for the fact that nearby cells will move as the pipette enters the brain.

“It’s almost like trying to hit a moving target inside the brain, which is a delicate tissue,” Suk says. “For machines it’s easier because they can keep track of where the cell is, they can automatically move the focus of the microscope, and they can automatically move the pipette.”

By combining several imaging processing techniques, the researchers came up with an algorithm that guides the pipette to within about 25 microns of the target cell. At that point, the system begins to rely on a combination of imagery and impedance, which is more accurate at detecting contact between the pipette and the target cell than either signal alone.

The researchers imaged the cells with two-photon microscopy, a commonly used technique that uses a pulsed laser to send infrared light into the brain, lighting up cells that have been engineered to express a fluorescent protein.

Using this automated approach, the researchers were able to successfully target and record from two types of cells — a class of interneurons, which relay messages between other neurons, and a set of excitatory neurons known as pyramidal cells. They achieved a success rate of about 20 percent, which is comparable to the performance of highly trained scientists performing the process manually.

Unraveling circuits

This technology paves the way for in-depth studies of the behavior of specific neurons, which could shed light on both their normal functions and how they go awry in diseases such as Alzheimer’s or schizophrenia. For example, the interneurons that the researchers studied in this paper have been previously linked with Alzheimer’s. In a recent study of mice, led by Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory, and conducted in collaboration with Boyden, it was reported that inducing a specific frequency of brain wave oscillation in interneurons in the hippocampus could help to clear amyloid plaques similar to those found in Alzheimer’s patients.

“You really would love to know what’s happening in those cells,” Boyden says. “Are they signaling to specific downstream cells, which then contribute to the therapeutic result? The brain is a circuit, and to understand how a circuit works, you have to be able to monitor the components of the circuit while they are in action.”

This technique could also enable studies of fundamental questions in neuroscience, such as how individual neurons interact with each other as the brain makes a decision or recalls a memory.

Bernardo Sabatini, a professor of neurobiology at Harvard Medical School, says he is interested in adapting this technique to use in his lab, where students spend a great deal of time recording electrical activity from neurons growing in a lab dish.

“It’s silly to have amazingly intelligent students doing tedious tasks that could be done by robots,” says Sabatini, who was not involved in this study. “I would be happy to have robots do more of the experimentation so we can focus on the design and interpretation of the experiments.”

To help other labs adopt the new technology, the researchers plan to put the details of their approach on their web site, autopatcher.org.

Other co-authors include Ingrid van Welie, Suhasa Kodandaramaiah, and Brian Allen. The research was funded by Jeremy and Joyce Wertheimer, the National Institutes of Health (including the NIH Single Cell Initiative and the NIH Director’s Pioneer Award), the HHMI-Simons Faculty Scholars Program, and the New York Stem Cell Foundation-Robertson Award.

Robots Podcast #242: CUJO – Smart Firewall for Cybersecurity, with Leon Kuperman



In this episode, MeiXing Dong talks with Leon Kuperman, CTO of CUJO, about cybersecurity threats and how to guard against them. They discuss how CUJO, a smart hardware firewall, helps protect the home against online threats.

Leon Kuperman

Leon Kuperman is the CTO of CUJO IoT Security. He co-founded ZENEDGE, an enterprise web application security platform, and Truition Inc. He is also the CTO of BIDZ.com.

Udacity Robotics video series: Interview with Cory Kidd from Catalia Health


Mike Salem from Udacity’s Robotics Nanodegree is hosting a series of interviews with professional roboticists as part of their free online material.

This week we’re featuring Mike’s interview with Cory Kidd. Dr. Kidd is focused on innovating within the rapidly changing healthcare technology market. He is the founder and CEO of Catalia Health, a company that delivers patient engagement across a variety of chronic conditions.

You can find all the interviews here. We’ll be posting them regularly on Robohub.

August 2017 fundings, acquisitions, IPOs and failures


August fundings totaled $369 million, but the number of August transactions, seven, was down from previous months; July and June each had 19 fundings. Acquisitions, on the other hand, remained steady, with a big one pending: Snap has been negotiating all month to acquire Chinese drone startup Zero Zero Robotics for around $150 million.

Fundings

  1. Auris Medical Robotics, the Silicon Valley startup headed by Dr. Frederic H. Moll, who previously co-founded Hansen Medical and Intuitive Surgical, raised $280 million in a Series D round led by Coatue Management and including earlier investors Mithril Capital Management, Lux Capital, and Highland Capital. Auris has raised a total of $530 million and is developing targeted, minimally invasive robotic-assisted therapies that treat only the diseased cells in order to prevent the progression of a patient’s illness. Lung cancer is the first disease it is targeting.
  2. Oryx Vision, an Israeli startup, raised $50 million in a round led by Third Point Ventures and WRV, with participation by Union Tech Ventures. They join existing investors Bessemer Venture Partners, Maniv Mobility, and Trucks VC, a firm focused on the future of transportation; the company has raised a total of $67 million to date. Oryx is developing a lidar for self-driving automobiles that uses microscopic silicon antennas to detect light frequencies; because the antennas are silicon, thousands can be packed into a single sensor, lowering the cost of lidar ranging. The result is increased range and sensitivity for an autonomous vehicle that needs to know exactly what is around it and what those things are doing, plus the ability to see through fog without being blinded by bright sunlight.
  3. TuSimple, a Chinese startup developing driverless technologies for the trucking industry, raised $20 million in a Series B funding round led by Nvidia with participation by Sina. Nvidia will own a 3% stake in TuSimple, while the startup will support the development of Nvidia’s artificial intelligence computing platform for self-driving vehicles, Drive PX 2.
  4. Atlas Dynamics, a Latvian/Israeli drone startup, raised $8 million from investment groups in Israel and in Asia. The 3-rotor Atlas Pro drone operates autonomously with interchangeable payloads and offers 55 minutes of flight time.
  5. Common Sense Robotics, an Israeli warehouse fulfillment robotics startup, raised $6 million from Aleph VC and Innovation Endeavors. Common Sense is developing small, automated urban spaces that combine the benefits of local distribution with the economics of automated fulfillment. In big cities these ‘micro-centers’ would receive, stock, and package the merchandise of participating vendors based on predictive algorithms; vendors would then arrange last-mile delivery.
  6. Sky-Futures, a London-based startup providing industrial inspection services with drones, raised $4 million in funding from Japanese giant Mitsui & Co. The announcement came as part of Theresa May’s just-concluded trip to Japan. Sky-Futures and Mitsui plan to provide inspections and other services to Mitsui’s clients across a range of sectors. Mitsui, a trading, investment and service company, has 139 offices in 66 countries.
  7. Ambient Intelligence Technology, a Japanese underwater drone manufacturer spun off from the University of Tsukuba, raised $1.93 million from Beyond Next Ventures, Mitsui Sumitomo Insurance Venture Capital, SMBC Venture Capital, and Freebit Investment. Ambient’s ROVs can operate autonomously for prolonged periods at depths of up to 300 meters.

Acquisitions

  1. DuPont Pioneer has acquired farm management software startup Granular for $300 million. San Francisco-based Granular’s farm management software helps farmers run more profitable businesses by enabling them to manage their operations, analyze the financials of each of their fields in real time, and create reports for third parties like landowners and banks. Last year Granular partnered with American Farm Bureau Insurance Services to streamline crop insurance data collection and reporting; it also has a cross-marketing arrangement with Deere.
  2. L3 Technologies acquired Massachusetts-based OceanServer Technology for an undisclosed amount. “OceanServer Technology positions L3 to support the U.S. Navy’s vision for the tactical employment of UUVs. This acquisition also enhances our technological capabilities and strengthens our position in growth areas where we see compelling opportunity,” said Michael T. Strianese, L3’s Chairman and CEO. “As a leading innovator and developer of UUVs, OceanServer Technology provides L3 with a new growth platform that is aligned with the U.S. Navy’s priorities.”
  3. KB Medical, SA, a Swiss medical robotics startup, was acquired by Globus Medical, a musculoskeletal solutions manufacturer, for an undisclosed amount. This is the second acquisition of a robotics startup by Globus, which acquired Excelsius Robotics in 2014. “The addition of KB Medical will enable Globus Medical to accelerate, enhance and expand our product portfolio in imaging, navigation and robotics. KB Medical’s experienced team of technology development professionals, its strong IP portfolio, and shared philosophy for robotic solutions in medicine strengthen Globus Medical’s position in this strategic area,” said Dave Demski of Globus’s Emerging Technologies group.
  4. Jenoptik, a Germany-based laser components manufacturer of vision systems for automation and robotics, acquired Michigan-based Five Lakes Automation, an integrator and manufacturer of robotic material handling systems, for an undisclosed amount.
  5. Honeybee Robotics, the Brooklyn-based robotic space systems provider, was acquired by Ensign-Bickford for an undisclosed amount. Ensign-Bickford is a privately held 181-year-old contractor and supplier of space launch vehicles and systems. “The timing is great,” said Kiel Davis, President of Honeybee Robotics. “Honeybee has a range of new spacecraft motion control and robotics products coming to market. And EBI has the experience and resources to help us scale up and optimize our production operations so that we can meet the needs of our customers today and in the near future.”

IPOs

  1. Duke Robotics, a Florida company whose advanced robotic systems, developed in Israel, provide troops with aerial support, has filed and been qualified for a stock offering of up to $15 million under SEC Tier II Reg A+, which allows anyone, not just wealthy investors, to purchase stock in approved equity crowdfunding offers.

Failures

  1. C&R Robotics (KR)
  2. EZ-Robotics (CN)

Robots won’t steal our jobs if we put workers at center of AI revolution


Future robots will work side by side with humans, just as they do today.
Credit: AP Photo/John Minchillo

by Thomas Kochan, MIT Sloan School of Management and Lee Dyer, Cornell University

The technologies driving artificial intelligence are expanding exponentially, leading many technology experts and futurists to predict machines will soon be doing many of the jobs that humans do today. Some even predict humans could lose control over their future.

While we agree about the seismic changes afoot, we don’t believe this is the right way to think about it. Approaching the challenge this way assumes society has to be passive about how tomorrow’s technologies are designed and implemented. The truth is there is no absolute law that determines the shape and consequences of innovation. We can all influence where it takes us.

Thus, the question society should be asking is: “How can we direct the development of future technologies so that robots complement rather than replace us?”

The Japanese have an apt phrase for this: “giving wisdom to the machines.” And the wisdom comes from workers and an integrated approach to technology design, as our research shows.

Lessons from history

There is no question coming technologies like AI will eliminate some jobs, as did those of the past.

The invention of the steam engine was supposed to reduce the number of manufacturing workers. Instead, their ranks soared.
Credit: Lewis Hine

More than half of the American workforce was involved in farming in the 1890s, back when it was a physically demanding, labor-intensive industry. Today, thanks to mechanization and the use of sophisticated data analytics to manage crops and cattle, fewer than 2 percent of workers are in agriculture, yet their output is significantly higher.

But new technologies will also create new jobs. After steam engines replaced water wheels as the source of power in manufacturing in the 1800s, the sector expanded sevenfold, from 1.2 million jobs in 1830 to 8.3 million by 1910. Similarly, many feared that the ATM’s emergence in the early 1970s would replace bank tellers. Yet even though the machines are now ubiquitous, there are actually more tellers today doing a wider variety of customer service tasks.

So trying to predict whether a new wave of technologies will create more jobs than it will destroy is not worth the effort, and even the experts are split 50-50.

It’s particularly pointless given that perhaps fewer than 5 percent of current occupations are likely to disappear entirely in the next decade, according to a detailed study by McKinsey.

Instead, let’s focus on the changes they’ll make to how people work.

It’s about tasks, not jobs

To understand why, it’s helpful to think of a job as made up of a collection of tasks that can be carried out in different ways when supported by new technologies.

And in turn, the tasks performed by different workers – colleagues, managers and many others – can also be rearranged in ways that make the best use of technologies to get the work accomplished. Job design specialists call these “work systems.”

One of the McKinsey study’s key findings was that about a third of the tasks performed in 60 percent of today’s jobs are likely to be eliminated or altered significantly by coming technologies. In other words, the vast majority of our jobs will still be there, but what we do on a daily basis will change drastically.

To date, robotics and other digital technologies have had their biggest effects on mostly routine tasks like spell-checking and those that are dangerous, dirty or hard, such as lifting heavy tires onto a wheel on an assembly line. Advances in AI and machine learning will significantly expand the array of tasks and occupations affected.

Creating an integrated strategy

We have been exploring these issues for years as part of our ongoing discussions on how to remake labor for the 21st century. In our recently published book, “Shaping the Future of Work: A Handbook for Change and a New Social Contract,” we describe why society needs an integrated strategy to gain control over how future technologies will affect work.

And that strategy starts with helping define the problems humans want new technologies to solve. We shouldn’t be leaving this solely to their inventors.

Fortunately, some engineers and AI experts are recognizing that the end users of a new technology must have a central role in guiding its design to specify which problems they’re trying to solve.

The second step is ensuring that these technologies are designed alongside the work systems with which they will be paired. A so-called simultaneous design process produces better results for both the companies and their workers compared with a sequential strategy – typical today – which involves designing a technology and only later considering the impact on a workforce.

An excellent illustration of simultaneous design is how Toyota handled the introduction of robotics onto its assembly lines in the 1980s. Unlike rivals such as General Motors that followed a sequential strategy, the Japanese automaker redesigned its work systems at the same time, which allowed it to get the most out of the new technologies and its employees. Importantly, Toyota solicited ideas for improving operations directly from workers.

In doing so, Toyota achieved higher productivity and quality in its plants than competitors like GM that invested heavily in stand-alone automation before they began to alter work systems.

Similarly, businesses that tweaked their work systems in concert with investing in IT in the 1990s outperformed those that didn’t. And health care companies like Kaiser Permanente and others learned the same lesson as they introduced electronic medical records over the past decade.

Each example demonstrates that the introduction of a new technology does more than just eliminate jobs. If managed well, it can change how work is done in ways that increase both productivity and the level of service by augmenting the tasks humans do.

Worker wisdom

But the process doesn’t end there. Companies need to invest in continuous training so their workers are ready to help influence, use and adapt to technological changes. That’s the third step in getting the most out of new technologies.

And it needs to begin before they are introduced. The important part of this is that workers need to learn what some are calling “hybrid” skills: a combination of technical knowledge of the new technology with aptitudes for communications and problem-solving.

Companies whose workers have these skills will have the best chance of getting the biggest return on their technology investments. It is not surprising that these hybrid skills are now in high and growing demand and command good salaries.

None of this is to deny that some jobs will be eliminated and some workers will be displaced. So the final element of an integrated strategy must be to help those displaced find new jobs and compensate those unable to do so for the losses endured. Ford and the United Auto Workers, for example, offered generous early retirement benefits and cash severance payments in addition to retraining assistance when the company downsized from 2007 to 2010.

Examples like this will need to become the norm in the years ahead. Failure to treat displaced workers equitably will only widen the gaps between winners and losers in the future economy that are now already all too apparent.

In sum, companies that engage their workforce when they design and implement new technologies will be best positioned to manage the coming AI revolution. By respecting the fact that today’s workers, like those before them, understand their jobs and their many tasks better than anyone, they will be better able to “give wisdom to the machines.”

Thomas Kochan, Professor of Management, MIT Sloan School of Management and Lee Dyer, Professor Emeritus of Human Resource Studies and Research Fellow, Center for Advanced Human Resource Studies (CAHRS), Cornell University

This article was originally published on The Conversation. Read the original article.

Robot learns to follow orders like Alexa

ComText allows robots to understand contextual commands such as, “Pick up the box I put down.”
Photo: Tom Buehler/MIT CSAIL

by Adam Conner-Simons & Rachel Gordon

Despite what you might see in movies, today’s robots are still very limited in what they can do. They can be great for many repetitive tasks, but their inability to understand the nuances of human language makes them mostly useless for more complicated requests.

For example, if you put a specific tool in a toolbox and ask a robot to “pick it up,” it would be completely lost. Picking it up means being able to see and identify objects, understand commands, recognize that the “it” in question is the tool you put down, go back in time to remember the moment when you put down the tool, and distinguish the tool you put down from other ones of similar shapes and sizes.

Recently researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have gotten closer to making this type of request easier: In a new paper, they present an Alexa-like system that allows robots to understand a wide range of commands that require contextual knowledge about objects and their environments. They’ve dubbed the system “ComText,” for “commands in context.”

The toolbox situation above was among the types of tasks that ComText can handle. If you tell the system that “the tool I put down is my tool,” it adds that fact to its knowledge base. You can then update the robot with more information about other objects and have it execute a range of tasks like picking up different sets of objects based on different commands.

“Where humans understand the world as a collection of objects and people and abstract concepts, machines view it as pixels, point-clouds, and 3-D maps generated from sensors,” says CSAIL postdoc Rohan Paul, one of the lead authors of the paper. “This semantic gap means that, for robots to understand what we want them to do, they need a much richer representation of what we do and say.”

The team tested ComText on Baxter, a two-armed humanoid robot from Rethink Robotics, the company founded by former CSAIL director Rodney Brooks.

The project was co-led by research scientist Andrei Barbu, alongside research scientist Sue Felshin, senior research scientist Boris Katz, and Professor Nicholas Roy. They presented the paper at last week’s International Joint Conference on Artificial Intelligence (IJCAI) in Australia.

How it works

Things like dates, birthdays, and facts are forms of “declarative memory.” There are two kinds of declarative memory: semantic memory, which is based on general facts, like “the sky is blue,” and episodic memory, which is based on personal facts, like remembering what happened at a party.

Most approaches to robot learning have focused only on semantic memory, which obviously leaves a big knowledge gap about events or facts that may be relevant context for future actions. ComText, meanwhile, can observe a range of visuals and natural language to glean “episodic memory” about an object’s size, shape, position, and type, and even whether it belongs to somebody. From this knowledge base, it can then reason, infer meaning and respond to commands.
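
To make the distinction concrete, here is a minimal sketch, in plain Python, of how time-stamped episodic facts can ground a command like “pick up the tool I put down.” It is purely illustrative: the data structures and the “most recent matching event” resolution rule are assumptions for this sketch, not ComText’s actual implementation.

```python
# Illustrative only -- a toy knowledge base, not ComText's implementation.
from dataclasses import dataclass, field

@dataclass
class ToyKnowledgeBase:
    semantic: dict = field(default_factory=dict)  # general facts, e.g. {"crackers": "snack"}
    episodic: list = field(default_factory=list)  # time-stamped observed events

    def observe(self, t, actor, action, obj):
        """Record one observed event (in reality, inferred from vision)."""
        self.episodic.append({"t": t, "actor": actor, "action": action, "obj": obj})

    def resolve(self, actor, action):
        """Ground a referring expression: the most recent matching event."""
        events = [e for e in self.episodic
                  if e["actor"] == actor and e["action"] == action]
        return max(events, key=lambda e: e["t"])["obj"] if events else None

kb = ToyKnowledgeBase()
kb.observe(t=1, actor="you", action="put_down", obj="wrench")
kb.observe(t=2, actor="you", action="put_down", obj="screwdriver")

# "Pick up the tool I put down" -> the latest matching episodic event wins.
print(kb.resolve("you", "put_down"))  # screwdriver
```

The hard part in practice, of course, is that the robot must infer such events from raw sensor data rather than receive them as clean records.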

“The main contribution is this idea that robots should have different kinds of memory, just like people,” says Barbu. “We have the first mathematical formulation to address this issue, and we’re exploring how these two types of memory play and work off of each other.”

With ComText, Baxter was successful in executing the right command about 90 percent of the time. In the future, the team hopes to enable robots to understand more complicated information, such as multi-step commands, the intent behind actions, and how to use an object’s properties to interact with it more naturally.

For example, if you tell a robot that one box on a table has crackers, and one box has sugar, and then ask the robot to “pick up the snack,” the hope is that the robot could deduce that sugar is a raw material and therefore unlikely to be somebody’s “snack.”

By creating much less constrained interactions, this line of research could enable better communications for a range of robotic systems, from self-driving cars to household helpers.

“This work is a nice step towards building robots that can interact much more naturally with people,” says Luke Zettlemoyer, an associate professor of computer science at the University of Washington who was not involved in the research. “In particular, it will help robots better understand the names that are used to identify objects in the world, and interpret instructions that use those names to better do what users ask.”

The work was funded, in part, by the Toyota Research Institute, the National Science Foundation, the Robotics Collaborative Technology Alliance of the U.S. Army, and the Air Force Research Laboratory.

New soft robots really suck: Vacuum-powered systems empower diverse capabilities

Recent advances in soft robotics have seen the development of soft pneumatic actuators (SPAs) to ensure that all parts of a robot are soft, including the functional parts. These SPAs have traditionally used increased pressure in parts of the actuator to initiate movement, but today a team from NCCR Robotics and the RRL at EPFL publishes a new kind of SPA, one powered by vacuum, in Science Robotics.

The new vacuum-powered soft pneumatic actuator (V-SPA) is soft, lightweight and very easy to fabricate. By using foam and coating it with layers of silicone rubber, the team has created an actuator that can be made from off-the-shelf parts without the need for molds; in fact, it takes just two hours to manufacture a V-SPA.

Once produced, the actuators were combined into plug-and-play “V-SPA modules,” which simplify the design of soft pneumatic robots with many degrees of freedom. Using these modules, the team created reconfigurable, modular robots in which every function is powered by a single shared vacuum source, enabling many different capabilities, such as multiple forms of ground locomotion, vertical climbing, object manipulation and stiffness changing.

To test the new modular robot, the team added a suction arm that used the vacuum pump to pick up and move a series of objects: suction is switched on when an object should be carried, and the arm is allowed to refill with air when the object should be released. Further validation came from attaching suction cups to the robot and using it to climb a vertical window, and from walking with a number of different gaits (either wave-like, as in a snake, or rolling).
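
As a rough illustration, that pick-and-release cycle reduces to switching a single valve between the shared vacuum line and the atmosphere. The sketch below is hypothetical: the Valve class stands in for whatever hardware interface a real build exposes and is not from the paper.

```python
# Hypothetical sketch -- 'Valve' is a placeholder for a real vacuum interface.
import time

class Valve:
    def open_to_vacuum(self):
        print("suction ON: cup connected to shared vacuum line, object held")

    def vent_to_atmosphere(self):
        print("suction OFF: cup refills with air, object released")

def pick_and_place(valve, carry_seconds):
    valve.open_to_vacuum()      # grab the object
    time.sleep(carry_seconds)   # carry while the arm moves
    valve.vent_to_atmosphere()  # release at the destination

pick_and_place(Valve(), carry_seconds=2.0)
```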

By creating a soft, lightweight actuator that can move in any direction, the team hopes to enable a new generation of truly soft, compliant robots that can interact safely with the humans who use them.

Reference

M. A. Robertson and J. Paik, “New soft robots really suck: Vacuum-powered systems empower diverse capabilities,” Science Robotics, 2017. doi: 10.1126/scirobotics.aan6357

Industrial robots in China up, up and away!

China has rapidly become a global leader in robotics and automation. Its 2016 sales of industrial robots reached the highest annual level ever recorded for any single country: 87,000 units (up 27% from 2015). China’s installed stock of industrial robots, at 340,000 units, is now also the highest in the world, and Chinese robot manufacturers increased their domestic market share to 31% (up 120% from 2015).

The International Federation of Robotics (IFR), which provided these figures, is forecasting that “from 2018 to 2020, a sales increase between 15 and 20 percent on average per year is possible for industrial robots.” And these projections don’t include service robots for professional and B2B use, and personal use such as toys, drones, mobile gofers, guides, home assistants, and consumer products like robotic vacuums and floor and window cleaners.

Outlook for 2017

According to a report released by China Robot Industry Alliance (CRIA) at the big World Robot Conference in August in Beijing and reported by China Daily, China’s industrial robot market is expected to reach $4.22 billion in 2017 representing more than 110,000 new industrial robots.

At the same press conference, CRIA also reported that China’s service robot market will reach $1.32 billion this year, up 28 percent from 2015.

Outlook to 2020

The main drivers of industrial robot growth in China are the electrical and electronics industry, followed by general handling, welding, and the auto industry. This broad and expanding demand is expected to continue as major contract manufacturers start or continue to automate their production. A further driving factor is China’s growing market for all kinds of consumer goods.

According to the ten-year national plan “Made in China 2025,” the Chinese government wants to transform China from a low-cost labor-intensive manufacturing giant into a technology-based world manufacturing power. The plan includes strengthening Chinese robot suppliers and further increasing their market shares in China and abroad.

Shanzhai

Shenzhen is China’s Silicon Valley of technology and hardware. Things get made FAST. All kinds of ‘things.’ The can-make attitude in Shenzhen is being duplicated around China, so it is important to know what goes on there: why it happens in Shenzhen, why it happens so fast, and what people there think about patents, intellectual property and Western companies.

Another driver of China’s relentless push toward automation and robotics is this: in 2016, China’s mobile payments hit $5.5 trillion, roughly 50 times the size of America’s $112 billion market, according to consulting firm iResearch. Chinese consumers are adopting cashless and e-commerce methods significantly faster than the rest of the world.

WIRED Video produced an hour-long documentary describing the process, the people, and ‘Shanzhai,’ the evolving philosophy of copycat manufacturing. It attempts to put a positive spin on patent avoidance (what many Westerners call stealing) and on the speed of production for adequate, rather than massive, profits. It is a worthwhile and very informative investment of an hour of your time.

New robot rolls with the rules of pedestrian conduct


by Jennifer Chu
Engineers at MIT have designed an autonomous robot with “socially aware navigation” that can keep pace with foot traffic while observing the general codes of pedestrian conduct.
Credit: MIT

Just as drivers observe the rules of the road, most pedestrians follow certain social codes when navigating a hallway or a crowded thoroughfare: Keep to the right, pass on the left, maintain a respectable berth, and be ready to weave or change course to avoid oncoming obstacles while keeping up a steady walking pace.

Now engineers at MIT have designed an autonomous robot with “socially aware navigation” that can keep pace with foot traffic while observing these general codes of pedestrian conduct.

In drive tests performed inside MIT’s Stata Center, the robot, which resembles a knee-high kiosk on wheels, successfully avoided collisions while keeping up with the average flow of pedestrians. The researchers have detailed their robotic design in a paper they will present at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) in September.

“Socially aware navigation is a central capability for mobile robots operating in environments that require frequent interactions with pedestrians,” says Yu Fan “Steven” Chen, who led the work as a former MIT graduate student and is the lead author of the study. “For instance, small robots could operate on sidewalks for package and food delivery. Similarly, personal mobility devices could transport people in large, crowded spaces, such as shopping malls, airports, and hospitals.”

Chen’s co-authors are graduate student Michael Everett, former postdoc Miao Liu, and Jonathan How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics at MIT.

Social drive

In order for a robot to make its way autonomously through a heavily trafficked environment, it must solve four main challenges: localization (knowing where it is in the world), perception (recognizing its surroundings), motion planning (identifying the optimal path to a given destination), and control (physically executing its desired path).

Chen and his colleagues used standard approaches to solve the problems of localization and perception. For the latter, they outfitted the robot with off-the-shelf sensors, such as webcams, a depth sensor, and a high-resolution lidar sensor. For the problem of localization, they used open-source algorithms to map the robot’s environment and determine its position. To control the robot, they employed standard methods used to drive autonomous ground vehicles.

“The part of the field that we thought we needed to innovate on was motion planning,” Everett says. “Once you figure out where you are in the world, and know how to follow trajectories, which trajectories should you be following?”

That’s a tricky problem, particularly in pedestrian-heavy environments, where individual paths are often difficult to predict. As a solution, roboticists sometimes take a trajectory-based approach, in which they program a robot to compute an optimal path that accounts for everyone’s desired trajectories. These trajectories must be inferred from sensor data, because people don’t explicitly tell the robot where they are trying to go. 

“But this takes forever to compute. Your robot is just going to be parked, figuring out what to do next, and meanwhile the person’s already moved way past it before it decides ‘I should probably go to the right,’” Everett says. “So that approach is not very realistic, especially if you want to drive faster.”

Others have used faster, “reactive-based” approaches, in which a robot is programmed with a simple model, using geometry or physics, to quickly compute a path that avoids collisions.

The problem with reactive-based approaches, Everett says, is the unpredictability of human nature: people rarely stick to a straight, geometric path, but rather weave and wander, veering off to greet a friend or grab a coffee. In such an unpredictable environment, these robots tend to collide with people, or, by avoiding people excessively, look like they are being pushed around the hallway.

 “The knock on robots in real situations is that they might be too cautious or aggressive,” Everett says. “People don’t find them to fit into the socially accepted rules, like giving people enough space or driving at acceptable speeds, and they get more in the way than they help.”

Training days

The team found a way around such limitations, enabling the robot to adapt to unpredictable pedestrian behavior while continuously moving with the flow and following typical social codes of pedestrian conduct.

They used reinforcement learning, a type of machine learning approach, in which they performed computer simulations to train a robot to take certain paths, given the speed and trajectory of other objects in the environment. The team also incorporated social norms into this offline training phase, in which they encouraged the robot in simulations to pass on the right, and penalized the robot when it passed on the left.

“We want it to be traveling naturally among people and not be intrusive,” Everett says. “We want it to be following the same rules as everyone else.”
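
A hedged sketch of what such social-norm reward shaping can look like is below; the paper’s actual reward terms and constants differ, and every field name here is an illustrative assumption.

```python
# Illustrative reward shaping for socially aware navigation; the paper's
# actual terms and constants differ, and all field names are assumptions.
def reward(state):
    r = 0.0
    if state["reached_goal"]:
        r += 1.0                                    # success bonus
    if state["collision"]:
        r -= 0.25                                   # hard collision penalty
    elif state["min_separation"] < 0.2:
        r -= 0.1 * (0.2 - state["min_separation"])  # crowding discomfort
    if state["passing_side"] == "left":
        r -= 0.05                                   # norm violation: pass on the right
    return r

print(reward({"reached_goal": False, "collision": False,
              "min_separation": 0.15, "passing_side": "left"}))  # -0.055
```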

The advantage to reinforcement learning is that the researchers can perform these training scenarios, which take extensive time and computing power, offline. Once the robot is trained in simulation, the researchers can program it to carry out the optimal paths, identified in the simulations, when the robot recognizes a similar scenario in the real world.

The researchers enabled the robot to assess its environment and adjust its path every one-tenth of a second. In this way, the robot can continue rolling through a hallway at a typical walking speed of 1.2 meters per second, without pausing to reprogram its route.

“We’re not planning an entire path to the goal — it doesn’t make sense to do that anymore, especially if you’re assuming the world is changing,” Everett says. “We just look at what we see, choose a velocity, do that for a tenth of a second, then look at the world again, choose another velocity, and go again. This way, we think our robot looks more natural, and is anticipating what people are doing.”
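
In code terms, that sense-act cycle is just a fixed-rate loop that commits to one velocity at a time. The sketch below is illustrative only; sense, policy, and act are placeholders for the robot’s perception stack, the trained network, and the drive interface.

```python
# Illustrative fixed-rate control loop; 'sense', 'policy' and 'act' are
# placeholders for the perception stack, trained network and drive interface.
import time

CONTROL_PERIOD = 0.1  # seconds: re-decide ten times per second

def control_loop(sense, policy, act):
    while True:
        start = time.monotonic()
        observation = sense()           # e.g. lidar scan + pedestrian tracks
        v, omega = policy(observation)  # one velocity command, not a full path
        act(v, omega)                   # execute for one cycle only
        # sleep out the remainder of the 0.1 s cycle
        time.sleep(max(0.0, CONTROL_PERIOD - (time.monotonic() - start)))
```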

Crowd control

Everett and his colleagues test-drove the robot in the busy, winding halls of MIT’s Stata Center, where it was able to drive autonomously for 20 minutes at a time. It rolled smoothly with the pedestrian flow, generally keeping to the right of hallways, occasionally passing people on the left, and avoiding any collisions.

“We wanted to bring it somewhere where people were doing their everyday things, going to class, getting food, and we showed we were pretty robust to all that,” Everett says. “One time there was even a tour group, and it perfectly avoided them.”

Everett says going forward, he plans to explore how robots might handle crowds in a pedestrian environment.

“Crowds have a different dynamic than individual people, and you may have to learn something totally different if you see five people walking together,” Everett says. “There may be a social rule of, ‘Don’t move through people, don’t split people up, treat them as one mass.’ That’s something we’re looking at in the future.”

This research was funded by Ford Motor Company.  

The need for robotics standards

Last week I was talking to a lead engineer at a Singapore company that is building a benchmarking system for robot solutions. Having seen my presentation at ROSCon 2016 about robot benchmarking, he asked me how I would benchmark solutions that are not ROS compatible. I said that I wouldn’t dedicate time to benchmarking solutions that are not ROS-based; instead, I would use the time to polish the ROS-based benchmarking and suggest that vendors adopt that middleware in their products.

Benchmarks are necessary and they need standards

Benchmarks are necessary to improve any field: with a benchmark, different solutions to a single problem can be compared, and a direction for improvement can be traced. Currently, robotics lacks such a benchmarking system.

I strongly believe that to create a benchmark for robotics we need a standard at the level of programming.

By having a standard at the level of programming, manufacturers can build their own hardware solutions at will, as long as they are programmable with the programming standard. That is the approach taken by devices that can be plugged into a computer. Manufacturers create the product on their own terms, and then provide a Windows driver that allows any computer in the world (that runs Windows) to communicate with the product. Once this computer-to-product communication is made, you can create programs that compare the same type of devices from different manufacturers for performance, quality, noise, whatever your benchmark is trying to compare.

You see? Different types of devices, different types of hardware. But all of them can be compared through the same benchmarking system that relies on the Windows standard.
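
Translated to robotics, the same pattern might look like the sketch below: a timing harness that works on any robot exposing the standard ROS navigation interface (move_base), regardless of manufacturer. This is my own minimal illustration, assuming a running ROS navigation stack, not an existing benchmarking suite; the waypoint coordinates are arbitrary.

```python
#!/usr/bin/env python
# Minimal sketch, assuming a running ROS navigation stack: because any
# ROS-based robot can expose the same move_base action interface, this
# timing harness runs unchanged on hardware from any manufacturer.
import time
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node("nav_benchmark")
client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = "map"
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 2.0   # illustrative benchmark waypoint
goal.target_pose.pose.orientation.w = 1.0

start = time.time()
client.send_goal(goal)
client.wait_for_result()
print("time-to-goal: %.2f s" % (time.time() - start))
```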

Software development for robots also needs standards

Standards are needed not only for comparing solutions but also to speed up robotics development. With a robotics standard, developers can concentrate on building solutions that do not have to be re-implemented whenever the robot hardware changes. Indeed, given the middleware structure, developers can dissociate themselves from the hardware enough to spend almost 100% of their time in the software realm while still developing code for robots.

We need the same type of standard for robotics. We need a kind of operating system that allows us to compare different robotics solutions. We need the Windows of the PCs, the Android of the phones, the CAN of the buses…
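
As a concrete taste of what such a standard buys you today, the minimal ROS node below drives forward any robot whose base subscribes to the conventional /cmd_vel topic, with no manufacturer-specific code. (The topic name is the common ROS convention; a particular robot may remap it.)

```python
#!/usr/bin/env python
# Minimal sketch: this node drives forward any ROS robot whose base
# listens on the conventional /cmd_vel topic, with no vendor-specific code.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node("forward_crawl")
pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
rate = rospy.Rate(10)  # publish at 10 Hz

cmd = Twist()
cmd.linear.x = 0.2  # m/s, gentle forward motion

while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()
```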


A few standard proposals and a winner

But you already know that. I’m not the first one to state this. Actually, many people have already tried to create such a standard. Some examples include Player, ROS, YARP, OROCOS, Urbi, MIRA or JdE Robot, to name a few.

Personally, I don’t actually care which standard is used. It could be ROS, it could be YARP, or it could be any other that has not yet been created. The only thing I really care about is that we adopt a standard as soon as possible.

And it looks like the developers have decided. Robotics developers prefer ROS as their common middleware to program robots.

No other middleware for robotics has had such a large adoption. Some data about it:

                                          ROS        YARP      OROCOS
Number of Google pages:               243,000      37,000      42,000
Citations of the paper describing it:   3,638         463         563
Alexa ranking:                         14,118   1,505,000     668,293

Note 1: Only showing the current big three players.

Note 2: Very simple comparison. Difficult to compare in other terms since data is not available.

Note 3: Data measured in August 2017. May vary at the time you are reading this. Links provided on the numbers themselves, so you can check yourself.

This is not just a feeling that we roboticists have: the numbers also indicate that ROS is becoming the standard for robotics programming.


Why ROS?

The question, then, is why ROS has emerged on top of all the other contestants. None of them is worse than ROS in terms of features; in fact, in each of the other middlewares you can find some feature that outperforms ROS. If that is so, why, and how, has ROS achieved the status of the standard?

A simple answer from my point of view: excellent learning tutorials and debugging tools.


Here is a video in which Leila Takayama, an early developer of ROS, explains when she realized that the key to having ROS used worldwide would be providing tools that simplify the reuse of ROS code. None of the other projects has such a clear, structured set of tutorials, and few of the other middlewares provide debugging tools for their packages. The lack of these two essentials keeps new people away from those middlewares (even if I understand why the developers of OROCOS and YARP don’t provide them: who wants to write tutorials or build debugging tools? Nobody!).

And it is not only about tutorials and debugging tools: the ROS creators also provide a good package-management system. As a result, developers worldwide can use each other’s packages (relatively) easily. This created an explosion in the number of ROS packages available, providing almost anything off the shelf for your brand-new ROSified robot.

Now, the impressive rate at which contributions to the ROS ecosystem are made makes its growth almost unstoppable.


What about companies?

At the beginning, ROS was mostly used by university students. However, as ROS matures and the number of packages increases, companies are realizing that adopting ROS is good for them too: they can use code developed by others, and it is easier to hire new engineers who already know the middleware (otherwise they would have to teach newcomers their own middleware).

As a result, many companies have jumped onto the ROS train, developing their products from scratch to be ROS compatible. Examples include Robotnik, Fetch Robotics, Savioke, Pal Robotics, Yujin Robot, The Construct, Rapyuta Robotics, Erle Robotics, Shadow Robot and Clearpath, to name a few of the sponsors of the next ROSCon. By creating ROS-compatible products, they cut their development time dramatically.

Going further, two Spanish companies have revolutionized the standardization of robotics products using the ROS middleware. On one side, Robotnik has created the ROS Components shop, where anyone can buy ROS-compatible devices, from mobile bases to sensors and actuators. On the other, Erle Robotics (now Acutronic Robotics) is developing Hardware ROS: H-ROS is a standardized software and hardware infrastructure for easily creating reusable and reconfigurable robot hardware parts. ROS is enabling hardware standardization too, but this time driven by companies, not research! That must mean something…


Finally, it looks like industrial robot manufacturers have understood the value that a standard can provide to their business. Even if they do not make their industrial robots ROS-enabled from the start, they are adopting ROS-Industrial, a flavour of ROS that allows them to ROSify their industrial robots and re-use all the software created for manipulators in the ROS ecosystem.

But are all companies jumping onto the ROS train? Not all of them!

Some companies, like Jibo, Aldebaran and Google, still do not rely on ROS for their robot programming. Some rely on their own middleware created before ROS existed (that is the case for Aldebaran); others are creating their own middleware from scratch. Their reasons: they do not believe ROS is good, they have already built a middleware, or they do not want their products to depend on someone else’s middleware. Those companies have fair reasons to go their own way. But will that make them competitive? (If the history of mobiles and VCRs is any guide, the answer may be no.)

So is ROS the standard for programming robots?

It is still too soon to answer that question. ROS looks like it is becoming the standard, but many things can change. It is unlikely that another middleware will take the title from ROS, but it may happen. A new player could even wipe ROS off the map (maybe Google will release its middleware to the public, as it did with Android, and take the sector by storm?).

Still, ROS has its problems, like a lack of security and the instability of some important packages. Even though the OSRF group is working hard to build a better ROS (for instance, ROS 2 is in beta with many improvements at its roots), hard work is still needed on some basic things (like the ROS controllers for real robots).


Given those numbers, at The Construct we believe that ROS IS THE STANDARD (that is why we are devoted to creating the best ROS learning tutorials in the world). Indeed, it was thanks to this standardization that two Barcelona students with zero knowledge of robotics were able to create an autonomous robot product for coffee shops in only three months (see the Barista robot).

This is the future, and it is good. In this future, thanks to standards, almost anyone will be able to design, build and program their own robotics product, much as PC shops assemble computers today.

So my advice, as I told the Singapore engineer, is to bet on ROS. Right now, it is the best option for a robotics standard.

 
