Archive 05.12.2017


Inertial-Grade MEMS Capacitive Accelerometers

Press Release by Silicon Designs:

Silicon Designs Introduces Inertial-Grade MEMS Capacitive Accelerometers
with Internal Temperature Sensor and Improved Low-Noise Performance
 
Five Full Standard G-Ranges from ±2 g to ±50 g Now Available for Immediate Customer Shipment
November 9, 2017, Kirkland, Washington, USA – Silicon Designs, Inc. (www.SiliconDesigns.com), a 100% veteran-owned, U.S.-based leading designer and manufacturer of highly rugged MEMS capacitive accelerometer chips and modules, today announced the immediate availability of its Model 1525 Series, a family of commercial- and inertial-grade MEMS capacitive accelerometers offering best-in-class low-noise performance.
Design of the Model 1525 Series incorporates Silicon Designs’ own high-performance MEMS variable capacitive sense element, along with a ±4.0V differential analog output stage, internal temperature sensor, and integral sense amplifier, all housed within a miniature, nitrogen-damped, hermetically sealed, surface-mount J-lead LCC-20 ceramic package (U.S. Export Classification ECCN 7A994). The 1525 Series features low-power (+5 VDC, 5 mA) operation, excellent in-run bias stability, and zero cross-coupling. Five full-scale ranges of ±2 g, ±5 g, ±10 g, ±25 g, and ±50 g are currently in production and available for immediate customer shipment. Each MEMS accelerometer offers reliable performance over a standard operating temperature range of -40 °C to +85 °C, and units are relatively insensitive to wide temperature changes and gradients. Each device is marked with a serial number on its top and bottom surfaces for traceability, and a calibration test sheet supplied with each unit shows measured bias, scale factor, linearity, operating current, and frequency response.
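As an illustration of how a differential analog output of this type is typically converted to acceleration (a sketch under assumed parameters, not the published transfer function of the 1525 Series; the actual calibrated scale factor and bias for any unit are given on its test sheet):

```python
# Hypothetical conversion of a differential accelerometer output to g.
# The 4.0 V full-scale figure is an assumption based on the +/-4.0 V
# differential output stage described above; real devices use the
# calibrated scale factor and bias from their test sheets.

def accel_from_differential(v_out_p, v_out_n, full_scale_g,
                            v_full_scale=4.0, bias_v=0.0):
    """Return acceleration in g from a differential output pair."""
    v_diff = (v_out_p - v_out_n) - bias_v       # bias-corrected differential voltage
    scale_factor = v_full_scale / full_scale_g  # volts per g (assumed nominal)
    return v_diff / scale_factor

# Example: a +/-10 g unit (0.4 V/g assumed) reading a 0.8 V differential output
print(accel_from_differential(0.8, 0.0, full_scale_g=10.0))  # -> 2.0
```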
Carefully regulated manufacturing processes ensure that each sensor is virtually identical, allowing users to swap out parts in the same g range with few or no testing modifications, saving time and resources. This gives test engineers a quick plug-and-play solution for almost any application, with full confidence in sensor accuracy when units are used within published specifications. As the OEM of its own MEMS capacitive accelerometer chips and modules, Silicon Designs further ensures consistently high-quality products, with full in-house customization capabilities to meet customers’ exacting standards. This flexibility lets Silicon Designs expeditiously design, develop and manufacture high-quality standard and custom MEMS capacitive accelerometers while keeping prices highly competitive.
Photo By: Silicon Designs – www.silicondesigns.com
The Silicon Designs Model 1525 Series tactical-grade MEMS inertial accelerometer family is ideal for zero-to-medium frequency instrumentation applications that require high repeatability, low noise, and maximum stability, including tactical guidance systems, guidance, navigation and control (GN&C) systems, AHRS, unmanned aerial vehicles (UAVs), unmanned ground vehicles (UGVs), remotely operated vehicles (ROVs), robotic controllers, flight control systems, and marine- and land-based navigational systems. They may also be used to support critical industrial test requirements, such as those common to agriculture, oil and gas drilling, photographic and meteorological drones, and seismic and inertial measurements.
Since 1983, the privately held Silicon Designs has served as a leading industry expert in the design, development and manufacture of highly rugged MEMS capacitive accelerometers and chips with integrated amplification, operating from its state-of-the-art facility near Seattle, Washington, USA. From the company’s earliest days developing classified components for the United States Navy under a Small Business Innovation Research (SBIR) grant, to its later Tibbetts Award and induction into the Space Technology Hall of Fame, Silicon Designs applies nearly 35 years of MEMS R&D innovation and applications engineering expertise to all finished product designs. For additional information on the Model 1525 Series or other MEMS capacitive sensing technologies offered by Silicon Designs, visit www.silicondesigns.com.
-###-
About Silicon Designs, Inc.
Silicon Designs was founded in 1983 with the goal of improving the accepted design standard for traditional MEMS capacitive accelerometers. At that time, industrial-grade accelerometers were bulky, fragile and costly. The engineering team at Silicon Designs listened to the needs of customers who required more compact, sensitive, rugged and reasonably priced accelerometer modules and chips, while also offering higher performance. The resultant product lines were designed and built to surpass customer expectations. The company has grown steadily over the years, while its core competency remains accelerometers, with the core business philosophies of “make it better, stronger, smaller and less expensive” and “let the customer drive R&D” maintained to this day.

The post Inertial-Grade MEMS Capacitive Accelerometers appeared first on Roboticmagazine.

Report from the AI Race Avoidance Workshop

GoodAI and AI Roadmap Institute
Tokyo, ARAYA headquarters, October 13, 2017

Authors: Marek Rosa, Olga Afanasjeva, Will Millership (GoodAI)

Workshop participants: Olga Afanasjeva (GoodAI), Shahar Avin (CSER), Vlado Bužek (Slovak Academy of Science), Stephen Cave (CFI), Arisa Ema (University of Tokyo), Ayako Fukui (Araya), Danit Gal (Peking University), Nicholas Guttenberg (Araya), Ryota Kanai (Araya), George Musser (Scientific American), Seán Ó hÉigeartaigh (CSER), Marek Rosa (GoodAI), Jaan Tallinn (CSER, FLI), Hiroshi Yamakawa (Dwango AI Laboratory)

Summary

It is important to address the potential pitfalls of a race for transformative AI, where:

  • Key stakeholders, including the developers, may ignore or underestimate safety procedures, or agreements, in favor of faster utilization
  • The fruits of the technology won’t be shared by the majority of people to benefit humanity, but only by a select few

Race dynamics may develop regardless of the motivations of the actors. For example, actors may be aiming to develop a transformative AI as fast as possible to help humanity, to achieve economic dominance, or even to reduce costs of development.

There is already an interest in mitigating potential risks. We are trying to engage more stakeholders and foster cross-disciplinary global discussion.

We held a workshop in Tokyo where we discussed many questions and came up with new ones which will help facilitate further work.

The General AI Challenge Round 2: Race Avoidance will launch on 18 January 2018, to crowdsource mitigation strategies for risks associated with the AI race.

What we can do today:

  • Study and better understand the dynamics of the AI race
  • Figure out how to incentivize actors to cooperate
  • Build stronger trust in the global community by fostering discussions between diverse stakeholders (including individuals, groups, private and public sector actors) and being as transparent as possible in our own roadmaps and motivations
  • Avoid fearmongering around both AI and AGI which could lead to overregulation
  • Discuss the optimal governance structure for AI development, including the advantages and limitations of various mechanisms such as regulation, self-regulation, and structured incentives
  • Call to action — get involved with the development of the next round of the General AI Challenge

Introduction

Research and development in fundamental and applied artificial intelligence is making encouraging progress. Within the research community, there is a growing effort to make progress towards general artificial intelligence (AGI). AI is being recognized as a strategic priority by a range of actors, including representatives of various businesses, private research groups, companies, and governments. This progress may lead to an apparent AI race, where stakeholders compete to be the first to develop and deploy a sufficiently transformative AI [1,2,3,4,5]. Such a system could be either AGI, able to perform a broad set of intellectual tasks while continually improving itself, or sufficiently powerful specialized AIs.

“Business as usual” progress in narrow AI is unlikely to confer transformative advantages. This means that although we will likely see an increase in competitive pressures, which may hamper cooperation on guiding the impacts of AI, such continued progress is unlikely to spark a “winner takes all” race. It is unclear whether AGI will be achieved in the coming decades, or whether specialized AIs would confer sufficient transformative advantages to precipitate a race of this nature. There seems to be less potential for a race among public actors trying to address current societal challenges. However, even in this domain there is a strong business interest which may in turn lead to race dynamics. Therefore, at present it is prudent not to rule out any of these future possibilities.

The issue has been raised that such a race could create incentives to neglect either safety procedures or established agreements between key players for the sake of gaining first-mover advantage and controlling the technology [1]. Unless we find strong incentives for various parties to cooperate, at least to some degree, there is also a risk that the fruits of transformative AI won’t be shared by the majority of people to benefit humanity, but only by a select few.

We believe that at the moment people present a greater risk than AI itself, and that fearmongering in the media around AI risks only damages constructive dialogue.

Workshop and the General AI Challenge

GoodAI and the AI Roadmap Institute organized a workshop in the Araya office in Tokyo, on October 13, 2017, to foster interdisciplinary discussion on how to avoid pitfalls of such an AI race.

Workshops like this are also being used to help prepare the AI Race Avoidance round of the General AI Challenge which will launch on 18 January 2018.

The worldwide General AI Challenge, founded by GoodAI, aims to tackle this difficult problem via citizen science, promote AI safety research beyond the boundaries of the relatively small AI safety community, and encourage an interdisciplinary approach.

Why are we doing this workshop and challenge?

With race dynamics emerging, we believe we are still at a time where key stakeholders can effectively address the potential pitfalls.

  • Primary objective: find a solution to problems associated with the AI race
  • Secondary objective: develop a better understanding of race dynamics including issues of cooperation and competition, value propagation, value alignment and incentivisation. This knowledge can be used to shape the future of people, our team (or any team), and our partners. We can also learn to better align the value systems of members of our teams and alliances

It’s possible that through this process we won’t find an optimal solution, but rather a set of proposals that could move us a few steps closer to our goal.

This post follows on from a previous blogpost and workshop Avoiding the Precipice: Race Avoidance in the Development of Artificial General Intelligence [6].

Topics and questions addressed at the workshop

General question: How can we avoid AI research becoming a race between researchers, developers, companies, governments and other stakeholders, where:

  • Safety gets neglected or established agreements are defied
  • The fruits of the technology are not shared by the majority of people to benefit humanity, but only by a select few

At the workshop, we focused on:

  • Better understanding and mapping the AI race: answering questions (see below) and identifying other relevant questions
  • Designing the AI Race Avoidance round of the General AI Challenge (creating a timeline, discussing potential tasks and success criteria, and identifying possible areas of friction)

We are continually updating the list of AI race-related questions (see appendix), which will be addressed further in the General AI Challenge, future workshops and research.

Below are some of the main topics discussed at the workshop.

1) How can we better understand the race?

  • Create and understand frameworks for discussing and formalizing AI race questions
  • Identify the general principles behind the race. Study meta-patterns from other races in history to help identify areas that will need to be addressed
  • Use first-principle thinking to break down the problem into pieces and stimulate creative solutions
  • Define clear timelines for discussion and clarify the motivation of actors
  • Value propagation is key. Whoever wants to advance needs to develop robust value propagation strategies
  • Resource allocation is also key to maximizing the likelihood of propagating one’s values
  • Detailed roadmaps with clear targets and open-ended roadmaps (where progress is not measured by how close the state is to the target) are both valuable tools to understanding the race and attempting to solve issues
  • Can simulation games be developed to better understand the race problem? Shahar Avin is in the process of developing a “Superintelligence mod” for the video game Civilization 5, and Frank Lantz of the NYU Game Center came up with a simple game where the user is an AI developing paperclips
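As a toy illustration of the kind of dynamics such simulations probe (our own sketch, not a model produced at the workshop), the safety-versus-speed tradeoff can be framed as a repeated prisoner’s dilemma, where repeated interaction makes cooperation on safety pay off:

```python
# Toy model of an AI race as a repeated two-player game (illustrative only;
# the payoff numbers are arbitrary assumptions, not workshop results).
# Each actor chooses to "cooperate" (invest in safety) or "defect" (cut
# safety for speed). Defecting against a cooperator yields a lead, but
# mutual defection is worst for everyone once accident risk is counted.

PAYOFFS = {  # (move_a, move_b) -> (payoff_a, payoff_b)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's last move."""
    return opponent_history[-1] if opponent_history else "cooperate"

def always_defect(opponent_history):
    return "defect"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)  # each actor sees the other's history
        move_b = strategy_b(hist_a)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Mutual cooperation outperforms mutual defection over repeated play:
print(play(tit_for_tat, tit_for_tat))      # -> (30, 30)
print(play(always_defect, always_defect))  # -> (10, 10)
```

Even in this crude setting, strategies that reward cooperation and punish defection (like tit-for-tat) sustain safety investment over repeated rounds, which is one reason trust and transparency among actors matter.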

2) Is the AI race really a negative thing?

  • Competition is natural and we find it in almost all areas of life. It can encourage actors to focus, and it lifts up the best solutions
  • The AI race itself could be seen as a useful stimulus
  • It is perhaps not desirable to “avoid” the AI race but rather to manage or guide it
  • Are compromise and consensus good? If actors over-compromise, the end result could be too diluted to make an impact, and not exactly what anyone wanted
  • Unjustified negative escalation in the media around the race could lead to unnecessarily stringent regulations
  • As we see race dynamics emerge, the key question is if the future will be aligned with most of humanity’s values. We must acknowledge that defining universal human values is challenging, considering that multiple viewpoints exist on even fundamental values such as human rights and privacy. This is a question that should be addressed before attempting to align AI with a set of values

3) Who are the actors and what are their roles?

  • Who is not part of the discussion yet? Who should be?
  • The people who will implement AI race mitigation policies and guidelines will be the people working on them right now
  • Military and big companies will be involved, not because we necessarily want them to shape the future, but because they are key stakeholders
  • Which existing research and development centers, governments, states, intergovernmental organizations, companies and even unknown players will be the most important?
  • What is the role of media in the AI race, how can they help and how can they damage progress?
  • Future generations should also be recognized as stakeholders who will be affected by decisions made today
  • Regulation can be viewed as an attempt to limit future actors who may be more intelligent or more powerful. Therefore, to avoid conflict, it’s important to make sure that any necessary regulations are well thought through and beneficial for all actors

4) What are the incentives to cooperate on AI?

One of the exercises at the workshop was to analyze:

  • What are motivations of key stakeholders?
  • What are the levers they have to promote their goals?
  • What could be their incentives to cooperate with other actors?

One of the prerequisites for effective cooperation is a sufficient level of trust:

  • How do we define and measure trust?
  • How can we develop trust among all stakeholders — inside and outside the AI community?

Predictability is an important factor. Actors who are open about their value system, transparent in their goals and ways of achieving them, and who are consistent in their actions, have better chances of creating functional and lasting alliances.

5) How could the race unfold?

Workshop participants put forward multiple viewpoints on the nature of the AI race and a range of scenarios of how it might unfold.

As an example, below are two possible trajectories of the race to general AI:

  • Winner takes all: one dominant actor holds an AGI monopoly and is years ahead of everyone. This is likely to follow a path of transformative AGI (see diagram below).

Example: Similar technology advantages have played an important role in geopolitics in the past. For example, by 1900 Great Britain, with only 40 million people, had capitalised on the advantage of technological innovation to create an empire covering about one quarter of the Earth’s land and population [7].

  • Co-evolutionary development: many actors on a similar level of R&D racing incrementally towards AGI.

Example: This direction would be similar to the first stage of space exploration, when two actors (the Soviet Union and the United States) were developing and successfully putting into use competing technologies.

Other considerations:

  • We could enter a race towards incrementally more capable narrow AI (not a “winner takes all” scenario: grab AI talent)
  • We are in multiple races to have incremental leadership on different types of narrow AI. Therefore we need to be aware of different risks accompanying different races
  • The dynamics will be changing as different races evolve

The diagram below explores some of the potential pathways from the perspective of how the AI itself might look. It depicts beliefs about three possible directions that the development of AI may progress in. Roadmaps of assumptions of AI development, like this one, can be used to think of what steps we can take today to achieve a beneficial future even under adversarial conditions and different beliefs.


Legend:

  • Transformative AGI path: any AGI that will lead to dramatic and swift paradigm shifts in society. This is likely to be a “winner takes all” scenario.
  • Swiss Army Knife AGI path: a powerful (can be also decentralized) system made up of individual expert components, a collection of narrow AIs. Such AGI scenario could mean more balance of power in practice (each stakeholder will be controlling their domain of expertise, or components of the “knife”). This is likely to be a co-evolutionary path.
  • Narrow AI path: in this path, progress does not indicate proximity to AGI and it is likely to see companies racing to create the most powerful possible narrow AIs for various tasks.

Current race assumption in 2017

Assumption: We are in a race to incrementally more capable narrow AI (not a “winner takes all” scenario: grab AI talent)

  • Counter-assumption: We are in a race to “incremental” AGI (not a “winner takes all” scenario)
  • Counter-assumption: We are in a race to recursive AGI (winner takes all)
  • Counter-assumption: We are in multiple races to have incremental leadership on different types of “narrow” AI

Foreseeable future assumption

Assumption: At some point (possibly 15 years) we will enter a widely-recognised race to a “winner takes all” scenario of recursive AGI

  • Counter-assumption: In 15 years, we continue incremental (not a “winner takes all” scenario) race on narrow AI or non-recursive AGI
  • Counter-assumption: In 15 years, we enter a limited “winner takes all” race to certain narrow AI or non-recursive AGI capabilities
  • Counter-assumption: The overwhelming “winner takes all” is avoided by the total upper limit of available resources that support intelligence

Other assumptions and counter-assumptions of race to AGI

Assumption: Developing AGI will take a large, well-funded, infrastructure-heavy project

  • Counter-assumption: A few key insights will be critical, and they could come from small groups. For example, Google Search was not invented inside a well-known established company; it started from scratch and revolutionized the landscape
  • Counter-assumption: Small groups can also layer key insights onto existing work of bigger groups

Assumption: AI/AGI will require large datasets and other limiting factors

  • Counter-assumption: AGI will be able to learn from real and virtual environments and a small number of examples the same way humans can

Assumption: AGI and its creators will be easily controlled by limitations on money, political leverage and other factors

  • Counter-assumption: AGI can be used to generate money on the stock market

Assumption: Recursive improvement will proceed linearly or with diminishing returns (e.g. learning to learn by gradient descent by gradient descent)

  • Counter-assumption: At a certain point in generality and cognitive capability, recursive self-improvement may begin to improve more quickly than linearly, precipitating an “intelligence explosion”
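The gap between these two assumptions can be made concrete with a toy growth model (our own illustration with arbitrary parameters, not a claim about actual AI progress): linear improvement adds a fixed increment per generation, while recursive improvement whose gains scale with current capability compounds.

```python
# Toy comparison of linear vs. recursive (compounding) self-improvement.
# Purely illustrative: the increment and rate values are arbitrary assumptions.

def linear_growth(capability, increment=1.0, steps=10):
    # Each generation adds a fixed amount, independent of current ability.
    for _ in range(steps):
        capability += increment
    return capability

def recursive_growth(capability, rate=0.5, steps=10):
    # Each generation's gain is proportional to current capability, so
    # improvements feed back into the improver: exponential growth.
    for _ in range(steps):
        capability += rate * capability
    return capability

print(linear_growth(1.0))     # -> 11.0
print(recursive_growth(1.0))  # -> 57.665... (1.5**10)
```

Under the linear assumption the two curves stay comparable for a long time; under the compounding one they diverge sharply after a few generations, which is the intuition behind the "intelligence explosion" counter-assumption.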

Assumption: Researcher talent will be key limiting factor in AGI development

  • Counter-assumption: Government involvement, funding, infrastructure, computational resources and leverage are all also potential limiting factors

Assumption: AGI will be a singular broad-intelligence agent

  • Counter-assumption: AGI will be a set of modular components (each limited/narrow) but capable of generality in combination
  • Counter-assumption: AGI will be an even wider set of technological capabilities than the above

6) Why search for AI race solution publicly?

  • Transparency allows everyone to learn about the topic, nothing is hidden. This leads to more trust
  • Inclusion — all people from across different disciplines are encouraged to get involved because it’s relevant to every person alive
  • If the race is taking place, we won’t achieve anything by not discussing it, especially if the aim is to ensure a beneficial future for everyone

Fear of an immediate threat is a big motivator to get people to act. However, behavioral psychology tells us that in the long term a more positive approach may work best to motivate stakeholders. Positive public discussion can also help avoid fearmongering in the media.

7) What future do we want?

  • Consensus might be hard to find and also might not be practical or desirable
  • AI race mitigation is essentially insurance: a way to avoid unhappy futures (which may be easier than maximizing all happy futures)
  • Even those who think they will be a winner may end up second, and thus it’s beneficial for them to consider the race dynamics
  • In the future it is desirable to avoid the “winner takes all” scenario and make it possible for more than one actor to survive and utilize AI (or in other words, it needs to be okay to come second in the race or not to win at all)
  • One way to describe a desired future is one where the happiness of each generation is greater than the happiness of the previous generation

We are aiming to create a better future and make sure AI is used to improve the lives of as many people as possible [8]. However, it is difficult to envisage exactly what this future will look like.

One way of envisioning this could be to use a “veil of ignorance” thought experiment. If all the stakeholders involved in developing transformative AI assume they will not be the first to create it, or that they would not be involved at all, they are likely to create rules and regulations which are beneficial to humanity as a whole, rather than be blinded by their own self interest.

AI Race Avoidance challenge

In the workshop we discussed the next steps for Round 2 of the General AI Challenge.

About the AI Race Avoidance round

  • Although this post has used the title AI Race Avoidance, it is likely to change. As discussed above, we are not proposing to avoid the race, but rather to guide and manage it and mitigate its pitfalls. We will be working on a better title with our partners before the release.
  • The round has been postponed until 18 January 2018. The extra time allows more partners, and the public, to get involved in the design of the round to make it as comprehensive as possible.
  • The aim of the round is to raise awareness, discuss the topic, get as diverse an idea pool as possible and hopefully to find a solution or a set of solutions.

Submissions

  • The round is expected to run for several months, and can be repeated
  • Desired outcome: next-steps or essays, proposed solutions or frameworks for analyzing AI race questions
  • Submissions could be very open-ended
  • Submissions can include meta-solutions, ideas for future rounds, frameworks, and convergent or open-ended roadmaps with various levels of detail
  • Submissions must have a two page summary and, if needed, a longer/unlimited submission
  • No limit on number of submissions per participant

Judges and evaluation

  • We are actively trying to ensure diversity on our judging panel. We believe it is important to have people from different cultures, backgrounds, genders and industries representing a diverse range of ideas and values
  • The panel will judge submissions on how well they maximize the chances of a positive future for humanity
  • Specifications of this round are work in progress

Next steps

  • Prepare for the launch of AI Race Avoidance round of the General AI Challenge in cooperation with our partners on 18 January 2018
  • Continue organizing workshops on AI race topics with participation of various international stakeholders
  • Promote cooperation: focus on establishing and strengthening trust among stakeholders across the globe. Transparency in goals facilitates trust: just as we would trust an AI system whose decision-making is transparent and predictable, the same applies to humans

Call to action

At GoodAI we are open to new ideas about how the AI Race Avoidance round of the General AI Challenge should look. We would love to hear from you if you have any suggestions on how the round should be structured, or if you think we have missed any important questions on our list below.

In the meantime we would be grateful if you could share the news about this upcoming round of the General AI Challenge with anyone you think might be interested.

Appendix

More questions about the AI race

Below is a list of some more of the key questions we will expect to see tackled in Round 2: AI Race Avoidance of the General AI Challenge. We have split them into three categories: Incentive to cooperate, What to do today, and Safety and security.

Incentive to cooperate:

  • How to incentivise the AI race winner to obey any related previous agreements and/or share the benefits of transformative AI with others?
  • What is the incentive to enter and stay in an alliance?
  • We understand that cooperation is important in moving forward safely. However, what if other actors do not understand its importance, or refuse to cooperate? How can we guarantee a safe future if there are unknown non-cooperators?
  • Looking at the problems across different scales, the pain points are similar even at the level of internal team dynamics. We need to invent robust mechanisms for cooperation between: individual team members, teams, companies, corporations and governments. How do we do this?
  • When considering various incentives for safety-focused development, we need to find a robust incentive (or a combination of such) that would push even unknown actors towards beneficial AGI, or at least an AGI that can be controlled. How?

What to do today:

  • How to reduce the danger of regulation over-shooting and unreasonable political control?
  • What role might states have in the future economy and which strategies are they assuming/can assume today, in terms of their involvement in AI or AGI development?
  • With regards to the AI weapons race, is a ban on autonomous weapons a good idea? What if other parties don’t follow the ban?
  • If regulation overshoots by creating unacceptable conditions for regulated actors, the actors may decide to ignore the regulation and bear the risk of potential penalties. For example, total prohibition of alcohol or gambling may displace these activities into illegal channels, while well-designed regulation can actually help reduce the most negative impacts, such as developing addiction.
  • AI safety research needs to be promoted beyond the boundaries of the small AI safety community and tackled interdisciplinarily. There needs to be active cooperation between safety experts, industry leaders and states to avoid negative scenarios. How?

Safety and security:

  • What level of transparency is optimal and how do we demonstrate transparency?
  • Impact of openness: how open shall we be in publishing “solutions” to the AI race?
  • How do we stop the first developers of AGI becoming a target?
  • How can we safeguard against malignant use of AI or AGI?

Related questions

  • What is the profile of a developer who can solve general AI?
  • Who is a bigger danger: people or AI?
  • How would the AI race winner use the newly gained power to dominate existing structures? Will they have a reason to interact with them at all?
  • Universal basic income?
  • Is there something beyond intelligence? Intelligence 2.0
  • End-game: convergence or open-ended?
  • What would an AGI creator desire, given the possibility of building an AGI within one month/year?
  • Are there any goods or services that an AGI creator would need immediately after building an AGI system?
  • What might be the goals of AGI creators?
  • What are the possibilities of those that develop AGI first without the world knowing?
  • What are the possibilities of those that develop AGI first while engaged in sharing their research/results?
  • What would make an AGI creator share their results, despite having the capability of mass destruction (e.g. Internet paralysis)? (The developer’s intentions might not be evil, but their defense against “nationalization” might logically be a show of force)
  • Are we capable of creating such a model of cooperation in which the creator of an AGI would reap the most benefits, while at the same time be protected from others? Does a scenario exist in which a software developer monetarily benefits from free distribution of their software?
  • How to prevent usurpation of AGI by governments and armies? (i.e. an attempt at exclusive ownership)

References

[1] Armstrong, S., Bostrom, N., & Shulman, C. (2016). Racing to the precipice: a model of artificial intelligence development. AI & SOCIETY, 31(2), 201–206.

[2] Baum, S. D. (2016). On the promotion of safe and socially beneficial artificial intelligence. AI and Society (2011), 1–9.

[3] Bostrom, N. (2017). Strategic Implications of Openness in AI Development. Global Policy, 8(2), 135–148.

[4] Geist, E. M. (2016). It’s already too late to stop the AI arms race — We must manage it instead. Bulletin of the Atomic Scientists, 72(5), 318–321.

[5] Conn, A. (2017). Can AI Remain Safe as Companies Race to Develop It?

[6] AI Roadmap Institute (2017). AVOIDING THE PRECIPICE: Race Avoidance in the Development of Artificial General Intelligence.

[7] Allen, Greg, and Taniel Chan. Artificial Intelligence and National Security. Report. Harvard Kennedy School, Harvard University. Boston, MA, 2017.

[8] Future of Life Institute. (2017). ASILOMAR AI PRINCIPLES developed in conjunction with the 2017 Asilomar conference.


Report from the AI Race Avoidance Workshop was originally published in the AI Roadmap Institute Blog on Medium.

Vote for your favorite in Robot Launch Startup Competition!

In the lead-up to the finals of the Robot Launch 2017 competition on December 14, we’re having one round of public voting for your favorite startup from the Top 25. While in previous years we’ve had public voting for all the startups, running alongside the investor judging, this year it’s opt-in, because many of the startups seeking investment are not yet ready to publicize. Each year the startups get better and better, so we can’t wait to see who you think is the best! Make sure you vote for your favorite below by 6pm PST on 10 December, and spread the word through social media using #robotlaunch2017.

Vote for Robohub Choice! (in alphabetic order)


BotsAndUs | uk (@botsandus)

BotsAndUs believe in humans and robots collaborating towards a better life. Our aim is to create physical and emotional comfort with robots to support wide adoption.

In May ‘17 we launched Bo, a social robot for events, hospitality and retail. Bo approaches you in shops, hotels or hospitals, finds out what you need, takes you to it and gives you tips on the latest bargains.

In a short time the business has grown considerably: global brands as customers (British Telecom, Etisalat, Dixons), a Government award for our Human-Robot-Interaction tech, members of Nvidia’s Inception program and intuAccelerate (bringing Bo to UK’s top 10 malls), >15k Bo interactions.

https://youtu.be/jrLaoKShKT4


C2RO | canada (@C2RO_Robotics)

C2RO (Collaborative Cloud Robotics) has developed a cloud-based software platform that uses real-time data processing technologies to provide AI-enabled solutions for robots. It dramatically augments the perceptive, cognitive and collaborative abilities of robots with a software-only solution that is portable to any cloud environment. C2RO is releasing its beta offering in November 2017, has over 40 organizations signed up for early access, and is currently working with 4 lead customers on hardware integrations and joint marketing.

no video


Kinema Systems Inc. | usa (@KinemaSystems)

Kinema Systems has developed Kinema Pick, the world’s first deep-learning based 3D vision system for robotic picking tasks in logistics and manufacturing. Kinema Pick is used for picking boxes off pallets onto conveyors with little a priori knowledge of the types of boxes or their arrangement on the pallet, and requires minimal training for new boxes. It uses 3D workcell information and motion planning to be self-driving, requiring no programming for new workcells. The founders and employees of Kinema Systems include veterans of Willow Garage, SRI, Apple and KTH who created MoveIt!, ROS-Control, SimTrack and other open-source packages used by thousands of companies, researchers and start-ups around the world.

https://youtu.be/PrQc-od2jeY


Mothership Aeronautics | usa (@mothershipaero)

The future is here. Mothership’s solar powered airship will enable robotic aerial persistence by serving as a charging/docking station and communications hub for drones. This enables not only a globally connected logistical network with 1 hour delivery on any product or service but also flying charging stations for flying cars. Imagine a Tesla supercharger network in the sky.

Our first stepping stone to this future is a solar powered airship for long range aerial data collection to tackle the troublesome linear infrastructure inspection market.

A vote for mothership is a vote for the Jetsons future we were promised.

https://www.youtube.com/watch?v=FsLM8vy7aDo&t=1s


Northstar Robotics | canada (@northstarrobot)

Northstar Robotics is an agricultural technology company that was founded by an experienced farmer and robotics engineer.

Our vision is to create the fully autonomous farm which will address the labour shortage problem and lower farm input costs.  We will make this vision a reality by first providing an open hardware and software platform to allow current farm equipment to become autonomous.  In parallel, we are going to build super awesome robots that will transform farming and set the standard for what modern agricultural equipment should be.

https://youtu.be/o2C4Cx-m2es


Tatu Robotics Pty Ltd | australia (@clintonburchat)

BLKTATU is an autonomous drone delivery platform that uses computer vision to enable deliveries to hard-to-reach places such as high-rise buildings and apartments. We deliver to where you are, autonomously.

https://youtu.be/2l7x2-xJ2As


Tennibot | usa (@tennibot)

Tennibot is the world’s first autonomous ball collector. It perfectly integrates computer vision and robotics to offer tennis players and coaches an innovative solution to a tedious task: picking up balls during practice. The Tennibot saves valuable time that is currently wasted bending over for balls. It allows the user to focus on hitting and let the robot take care of the hard work. Tennibot stays out of the way of players and works silently in an area specified by the user. It also comes with a companion app that gives the user full control of their personal ball boy.

https://youtu.be/BcHl1RKVhaM


UniExo | ukraine 

UniExo aims to help people with injuries and movement problems restore the motor functions of their bodies with modular robotic exoskeleton devices, without the additional help of doctors.

Thanks to our device and its advantages, we can help these users in rehabilitation. The product provides free movement for people with disabilities in a form that is comfortable and safe for them, without outside help, as well as for people recovering in the post-operative or post-traumatic period.

We can give people a second chance at a normal life, and motivate them to do things for our world that can help other people.

https://youtu.be/kjHN35zasvE


Woobo | usa (@askwoobo)

Woobo unfolds a world of imagination, fun, and knowledge to children, bringing the magic of a robot companion into children’s life. Relying on cutting-edge robotics and AI technologies, our team is aiming to realize the dream of millions of children – bringing them a fluffy and soft buddy that can talk to them, amuse them, inspire them, and learn along with them. For parents, Woobo is an intelligent assistant with customized content that can help entertain, educate, and engage children, as well as further strengthen the parent-child bond.

https://youtu.be/Z_ip6nigzDg

CAST YOUR VOTE FOR “ROBOHUB CHOICE”

VOTING CLOSES ON SUNDAY DEC 10 AT 6:00 PM [PST]

Model-based reinforcement learning with neural network dynamics

By Anusha Nagabandi and Gregory Kahn

Enabling robots to act autonomously in the real-world is difficult. Really, really difficult. Even with expensive robots and teams of world-class researchers, robots still have difficulty autonomously navigating and interacting in complex, unstructured environments.

Fig 1. A learned neural network dynamics model enables a hexapod robot to learn to run and follow desired trajectories, using just 17 minutes of real-world experience.

Why are autonomous robots not out in the world among us? Engineering systems that can cope with all the complexities of our world is hard. From nonlinear dynamics and partial observability to unpredictable terrain and sensor malfunctions, robots are particularly susceptible to Murphy’s law: everything that can go wrong will go wrong. Instead of fighting Murphy’s law by coding for each possible scenario our robots may encounter, we could instead embrace this possibility of failure and enable our robots to learn from it. Learning control strategies from experience is advantageous because, unlike hand-engineered controllers, learned controllers can adapt and improve with more data. Therefore, when presented with a scenario in which everything does go wrong, although the robot will still fail, the learned controller will hopefully correct its mistake the next time it encounters a similar scenario. In order to deal with the complexities of tasks in the real world, current learning-based methods often use deep neural networks, which are powerful but not data-efficient: these trial-and-error learners will most often still fail a second time, and a third time, and often thousands to millions of times. The sample inefficiency of modern deep reinforcement learning methods is one of the main bottlenecks to leveraging learning-based methods in the real world.

We have been investigating sample-efficient learning-based approaches with neural networks for robot control. For complex and contact-rich simulated robots, as well as real-world robots (Fig. 1), our approach is able to learn locomotion skills of trajectory-following using only minutes of data collected from the robot randomly acting in the environment. In this blog post, we’ll provide an overview of our approach and results. More details can be found in our research papers listed at the bottom of this post, including this paper with code here.
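
The high-level recipe behind this approach (collect random interaction data, fit a dynamics model, then plan with the model) can be sketched in a few dozen lines. The sketch below is illustrative only: it uses a toy 1-D point-mass environment and a linear least-squares model as a stand-in for the neural network, together with random-shooting model-predictive control; the environment, goal, and hyperparameters are all invented, not taken from the papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D point-mass environment: state s = [position, velocity], action a = force.
def true_dynamics(s, a, dt=0.1):
    pos, vel = s
    return np.array([pos + vel * dt, vel + a * dt])

# Step 1: collect transitions by acting randomly in the environment.
S, A, D = [], [], []
s = np.zeros(2)
for _ in range(2000):
    a = rng.uniform(-1, 1)
    s_next = true_dynamics(s, a)
    S.append(s); A.append([a]); D.append(s_next - s)
    s = s_next if np.all(np.abs(s_next) < 10) else np.zeros(2)
X = np.hstack([np.array(S), np.array(A)])  # model input: (state, action)
Y = np.array(D)                            # model target: state change

# Step 2: fit the dynamics model (linear least squares standing in for a neural net).
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Step 3: random-shooting MPC. Sample candidate action sequences, roll them out
# under the learned model, and execute the first action of the lowest-cost one.
def mpc_action(s, goal=1.0, horizon=10, n_seq=200):
    seqs = rng.uniform(-1, 1, size=(n_seq, horizon))
    states = np.tile(s, (n_seq, 1))
    costs = np.zeros(n_seq)
    for t in range(horizon):
        states = states + np.hstack([states, seqs[:, t:t + 1]]) @ W
        costs += (states[:, 0] - goal) ** 2  # squared distance to goal position
    return seqs[np.argmin(costs), 0]

# Closed-loop control: replan at every step.
s = np.zeros(2)
for _ in range(100):
    s = true_dynamics(s, mpc_action(s))
print(f"final position: {s[0]:.2f}")  # should settle near the goal at 1.0
```

The real method replaces the linear model with a trained neural network and the toy dynamics with the physical robot, but the data-collection / model-fitting / replanning loop has the same shape.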


DARPA challenge mystery solved and how to handle Robocar failures

A small mystery from Robocar history was resolved recently, and revealed at the DARPA grand challenge reunion at CMU.

The story is detailed here at IEEE Spectrum and I won’t repeat it all, but a brief summary goes like this.

In the second Grand Challenge, CMU’s Highlander was a favourite and was doing very well. Mid-race it started losing engine power, and it stalled for long enough that Stanford’s Stanley beat it by 11 minutes.

It was recently discovered that a small computerized fuel-injector controller in the Hummer (one of only two) may have been damaged in a roll-over Highlander had; if you pressed on it, the engine would lose power or fail.

People have wondered how the robocar world might be different if Highlander had not had that flaw. Stanford’s victory was a great boost for its team, and Sebastian Thrun was hired to start Google’s car team. Chris Urmson, lead on Highlander, was also hired there to lead engineering, and he would end up staying on the project much longer than Sebastian, who was seduced by the idea of doing Udacity. Google was always likely to have closer ties to Stanford people anyway, given where it is.

CMU’s fortunes might have ended up better, but they managed to be the main source of Uber’s first team.

There are many stories of small things making a big difference. Also well known is how Anthony Levandowski, who entered a motorcycle in the race, forgot to turn on a stabilizer. The motorcycle fell over 2 seconds after he released it, dashing all of his team’s work. Anthony of course did OK (as another leader on the Google team, and later at Uber), but has recently had some “trouble”.

Another famous incident came when Volvo was doing a demo for press of their collision avoidance system. You could not pick a worse time for a failure, and of course there is video of it.

They had tested the demo extensively the night before. In fact they tested it too much, and left a battery connected during the night, so that it was drained by the morning when they showed off to the press.

These stories remind people of all the ways things go wrong. More to the point, they remind us that we must design expecting things to go wrong, and have systems that are able to handle that. These early demos and prototypes didn’t have that, but cars that go on the road do and will.

Making systems resilient is the only answer when they get as complex as they are. Early car computers were pretty simple, but a self-driving system is so complex that it is never going to be formally verified or perfect. Instead, it must be expected that every part will fail, and the failure of every part — or even every combination of parts — should be tested in both simulation, and where possible in reality. What is tested is how the rest of the system handles the failure, and if it doesn’t handle it, that has to be fixed.

It does not need to handle it perfectly, though. For example, in many cases the answer to failure will be, “We’re at a reduced safety level. Let’s get off the road, and summon another car to help the passengers continue on their way.”

It might even be a severely reduced safety level. Possibly even, as hard as this number may be to accept, 100 times less safe! That’s because the car will never drive very far in that degraded condition. Consider a car that has one incident every million miles. In degraded condition, it might have an incident every 10,000 miles. You clearly won’t drive home in that condition, but the 1/4 mile of driving at degraded level is as risky as 25 miles of ordinary driving at full operational level, which is a risk taken every day. As long as vehicles do not drive more than a short distance at this degraded level, the overall safety record should still be satisfactory.
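
The bookkeeping in that example is straightforward to check; the figures below are the article’s illustrative numbers, not measured data.

```python
# Illustrative risk bookkeeping from the text: degraded mode is 100x less safe,
# but is only used for a quarter mile of driving.
normal_miles_per_incident = 1_000_000   # nominal: one incident per million miles
degraded_miles_per_incident = 10_000    # 100x worse in the degraded condition
degraded_distance_miles = 0.25          # distance driven to get off the road

# Expected incidents accrued while limping off the road...
expected_incidents = degraded_distance_miles / degraded_miles_per_incident
# ...equal the risk of this many miles of normal driving:
equivalent_normal_miles = expected_incidents * normal_miles_per_incident
print(equivalent_normal_miles)  # 25.0 -- same risk as 25 miles of normal driving
```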

Of course, if the safety level degrades to a level that could be called “dangerous” rather than “less safe” that’s another story. That must never be allowed.

An example of this would be failure of the main sensors, such as a LIDAR. Without a LIDAR, a car would rely on cameras and radar. Companies like Tesla think they can make a car fully safe with just those two, and perhaps they will some day. But even though those are not yet safe enough, they are safe enough for a problem like getting off the road, or even getting to the next exit on a highway.

This is important because we will never get perfection. We will only get lower and lower levels of risk, and the risk will not be constant — it will be changing with road conditions, and due to system or mechanical failures. But we can still get the safety level we want — and get the technology on the road.

Humanoids 2017 photo competition and winners

The Humanoids 2017 conference earlier this month hosted an excellent photo competition. I was lucky to be one of the judges, along with Erico Guizzo from IEEE Spectrum, and Giorgio Metta as awards chair.

The decision, which was tough given the excellent submissions, was based on social media votes and scores for originality, creativity, photo structure, and tech or fun factor.

The overall winner for Best Humanoid Photo featured a pensive iCub and was entitled “To be, or not to be” by Pedro Vicente from the Vislab in Lisbon.

Title: “To be, or not to be”
Robot: iCub
Photo by: Pedro Vicente, Vislab@ISR-Lisboa

Finalists, in no particular order, were:

Title: “One who doesn’t throw the dice can never expect to score a six. One who doesn’t throw the ball can never expect to learn to juggle.”
Robot: NICO ( Neuro-Inspired COmpanion )
Photo by: Erik Strahl, Universität Hamburg (University of Hamburg, Germany).
Title: “Ready to explore (TORO accompanied by LRU, two experimental robots for verifying concepts for planetary exploration)”
Robot: Toro, LRU
Photo by: Maximo A. Roa, Christian Ott, Johannes Englsberger, Bernd Henze, Alexander Werner, Oliver Porges, DLR – German Aerospace Center
Title: “Sweaty goes Japan”
Robot: Sweaty
Photo by: Heitz, Benjamin, University Offenburg

The winner for Best Funny Humanoid was this picture of a frustrated SABIAN entitled “If only I had a self-driving car” by Marco Moscato at the Biorobotics Institute, Scuola Superiore Sant’Anna.

Title: “If only I had a self-driving car.”
Robot: SABIAN (Sant’Anna BIped humANoid)
Photo by: Marco Moscato, The Biorobotics Institute, Scuola Superiore Sant’Anna

Finalists, in no particular order, were:

Title: “NAOs’ Kindergarten :) “
Robot: Nao
Photo by: Mohsen Kaboli, Technical University of Munich (TUM).
Title: Ain’t easier than imagenet
Robot: iCub
Photo by: Lorenzo Natale, Elisa Maiettini, Vadim Tikhanoff, Istituto Italiano di Tecnologia
Title: “The Humanoids deadline is in six hours- I need those results!”
Robots: Talos and Nao
Photo by: Aljaž Kramberger, Barry Ridge, Robert Bevec, Miha Deniša, Miha Dežman, Rok Goljat and Andrej Gams, Jožef Stefan Institute.

You can see all the other photos below. Congratulations to all the participants, and to the Humanoids 2017 team for the organisation!

Title: “A journey of a thousand miles begins with a single step.” (千里之行,始於足下) , https://en.wiktionary.org/wiki/a_journey_of_a_thousand_miles_begins_with_a_single_step
Robot: SABIAN (Sant’Anna BIped humANoid)
Photo by: Marco Moscato, The Biorobotics Institute, Scuola Superiore Sant’Anna
Title: “Ready for the match of the year: Baxter VS McGregor”
Robot: Baxter
Photo by: Alessandro Albini and Simone Denei, DIBRIS (University of Genoa, Italy)
Title: “Valkyrie preparing to use a drill”
Robot: Valkyrie
Photo by: Nicholas Thoma, NASA
Title: “DYROS JET Ready for Action”
Robot: DYROS JET
Photo by: Jaehoon Sim, Seoul National University, South Korea
Title: “What’s up bro”
Robot: DYROS JET
Photo by: Jaehoon Sim, Seoul National University, South Korea
Title: “JET prefer riding”
Robot: DYROS JET
Photo by: Jaehoon Sim, Seoul National University, South Korea
Title: “Walking to the future”
Robot: DYROS RED
Photo by: Mathew Schwartz, Seoul National University
Title: “Machine Learning”
Robot: DYROS JET
Photo by: Jaehoon Sim, Seoul National University, South Korea
Title: “The Creation of Vizzy”
Robot: iCub [left], Vizzy [right]
Photo by: João Avelino, VisLab, Institute for Systems and Robotics, Instituto Superior Técnico
Title: Dialogue of generations
Robots: Romeo & HRP2
Photo by: Mehdi Benallegue, CNRS-LAAS, France
Title: Discussing the fate of humanity
Robots: Romeo & HRP2
Photo by: Mehdi Benallegue, CNRS-LAAS, France
Title: NimbRo-OP2 vs. Sweaty (RoboCup 2017 AdultSize Final)
Robots: NimbRo-OP2
Photo by: Sven Behnke, University of Bonn
Title: NimbRo-OP2 kicking
Robot: NimbRo-OP2
Photo by: Sven Behnke, University of Bonn
Title: They grow up so fast
Robot: NimbRo-OP2
Photo by: Sven Behnke, University of Bonn
Title: Posing with NimbRo-OP2
Robot: NimbRo-OP2
Photo by: Aimee Han, ROBOTIS.
Title: “Yes, I’m drunk.”
Robot: iCub
Photo by: Daniele Pucci, Istituto Italiano di Tecnologia
Title: “I’ve lost my mind for the conference”
Robot: iCub
Photo by: Daniele Pucci, Istituto Italiano di Tecnologia
Title: “Don’t look at me, I’m naked!!”
Robot: iCub
Photo by: Daniele Pucci, Istituto Italiano di Tecnologia
Title: “Discobolus”
Robot: Talos
Photo by: Carlos Viva, PAL Robotics
Title: “Walk like an egyptian”
Robot: Talos
Photo by: PAL Robotics
Title: “Soccer Champion”
Robot: Nao
Photo by: Mathew Schwartz, New Jersey Institute of Technology
Title: “Brace yourselves. iCub is coming.”
robot: iCub, the Night King
Photo by: Marco Randazzo, Istituto Italiano di Tecnologia (IIT), Genova, Italy.
Title: “Blessed among women: life is not so hard when you are a broken robot”
Robot: iCub
Photo by: Brice Clement, INRIA Nancy, France
Title: “Fainting robot (TORO getting tired of doing experiments)”
Robot: Toro
Photo by: Maximo A. Roa, Christian Ott, Johannes Englsberger, Bernd Henze, Alexander Werner, Oliver Porges, DLR – German Aerospace Center
Title: “Challenging Yoga pose (Toro getting ready for transportation)”
Robot: Toro
Photo by: Maximo A. Roa, Christian Ott, Johannes Englsberger, Bernd Henze, Alexander Werner, Oliver Porges, DLR – German Aerospace Center
Title: “Pick and install (TORO picks a part for installation on an airplane frame – project COMANOID)”
Robot: Toro
Photo by: Maximo A. Roa, Christian Ott, Johannes Englsberger, Bernd Henze, Alexander Werner, Oliver Porges, DLR – German Aerospace Center.
Title: “I could work faster, if only I had ten fingers.”
Robot: Nao
Photo by: Aljaž Kramberger, Barry Ridge, Robert Bevec, Miha Deniša, Miha Dežman, Rok Goljat and Andrej Gams, Jožef Stefan Institute.
Title: “(He)iCub and “integration””
Robots: Nao and HeiCub
Photo by: Yue Hu, Optimization, Robotics and Biomechanics (ORB), ZITI, Heidelberg University
Title: “I want a head” – the (!)sad story of a headless iCub
Robots: Nao and HeiCub
Photo by: Yue Hu, Optimization, Robotics and Biomechanics (ORB), ZITI, Heidelberg University.
Title: If I had a robot…. – Elementary school students draw what they would like a robot do for them.
Robots: Pepper, iCub, Nao
Photo by: Wibke Borngesser, Institute for Cognitive Systems, TU München.
Title: Sweaty supports exhausted coach during RoboCup Soccer
Robot: Sweaty
Photo by: Sandra Lutz-Vogt, Univ. Appl. Sci. Offenburg

#ERW2017: “Robot Discovery” central event in tweets

The European Robotics Week 2017 (ERW2017) Central Event organised in Brussels saw the “Robots Discovery” exhibition hosted by the European Committee of the Regions on 20-23 November, where robotics experts from 30 European and regionally-funded projects outlined the impact of their work on society.

The exhibiting projects showed robots assisting during surgery, providing support for elderly care, helping students develop digital skills, monitoring the environment, applying agricultural chemicals with precision and less waste, and saving lives after disasters. The #ERW2017 hashtag reached over 1 million impressions on social media. Here’s a look at how the “Robots Discovery” central event was portrayed.

Day 1, 20 November – exhibition of robotics projects for healthcare

Day 2, 21 November – exhibition of education robotics projects

The robot bus of the Sohjoa Baltic project arrived at the European Committee of the Regions.

The day ended with a reception hosted by First Vice-President of the European Committee of the Regions, Markku Markkula, and a concert by the Logos Robots Orchestra.

Day 3, 22 November – exhibition of robotics projects related to the environment

The Sohjoa Baltic robot bus met the public on the Esplanade of the European Parliament.

The day ended with a high-level dinner hosted by MEP Martina Werner at the European Parliament.

Day 4, 23 November – exhibition of robots for international cooperation

Day 5, 24 November – Robotics classes for children

Baudouin Hubert held robotics classes for children at the Euro Space Center:

During the #ERW2017 we also had plenty of fun with REEM C from PAL Robotics playing the piano.

Thank you to all exhibitors, organisers and event partners!

See you at European Robotics Week 2018 #ERW2018!

Robots and the two-edged blade of new technology

There’s a scare-tactic video going around on social media, and I wanted to weigh in on it—this particular video has gone from 500,000 views to almost 2 million in the past 10 days. As a matter of principle, I will not link to it. It presents a scary future in which killer robotic drones—controlled by any terrorist organization or government—run rampant.

The twin issues of killer robots and robots taking our jobs are the result of the two-edged blade of new technology, i.e., technologies that can be used for both good and evil. Should these new technologies be stopped entirely, or regulated? Can they even be regulated? Once you see a video like this one, you doubt whether they can ever be controlled. It is fear-mongering media that does not admit it is fake until far past the point of responsibility.

Videos like this one—and there are many—are produced for multiple purposes. The issues often get lost to the drama of the message. They are the result of, or fueled by, headline-hungry news sources, social media types and commercial and political strategists. This particular shock video—fake as it is—is promoting a longer, more balanced documentary and non-profit organization on the subject of stopping autonomous killing machines. Yet there are other factual videos of the U.S. military’s Perdix drones swarming just like in the shock video. Worse still, the same technologists that teach future roboticists at MIT are also developing those Perdix drones and their swarming capabilities.

My earlier career was in political strategy and I know something about the tactics of fear and manipulation—of raising doubts for manipulative purposes, as well as the real need for technologies to equalize the playing field. Again, the two-edged sword.

At the present time, we are under very real threat militarily and from the cyber world. We must invest in countering those threats and inventing new preventative weaponry. Non-militarily, jobs ARE under threat—particularly the dull, dirty and dangerous (DDD) ones easily replaced by robots and automation. In today’s global and competitive world, DDD jobs are being replaced because they are costly and inefficient. But they are also being replaced without too much consideration for those displaced.

It’s hard for me as an investor and observer (and in the past as a hands-on participant) to reconcile what I know about the state of robotics, automation and artificial intelligence today with the future use of those very same technologies.

I see the speed of change everywhere. For example: for many years, Google has had thousands of coders writing its self-driving system and compiling the relevant and necessary databases and models. But along comes George Hotz and other super-coders who single-handedly write code that writes code to accomplish the same thing. Code that writes code is what Elon Musk and Stephen Hawking fear, yet it is inevitable and will soon be commonplace. Ray Kurzweil named this phenomenon and claims that the ‘singularity’ will happen by 2045, with an interim milestone in 2029 when AI will achieve human levels of intelligence. Kurzweil’s forecast, predicated on exponential technological growth, is clearly evident in the Google/Hotz example.

Pundits and experts suggest that when machines become smarter than human beings, they’ll take over the world. Kurzweil doesn’t think so. He envisions the same technology that will make AIs more intelligent giving humans a boost as well. It’s back to the two-edged sword of good and evil.

In my case, as a responsible writer and editor covering robotics, automation and artificial intelligence, I think it’s important to stay on topic, not fan the flames of fear, and to present the positive side of the sword.

Uber buys 24,000 Volvos, trolley problems get scarier, and liability

Uber and Volvo announced an agreement where Uber will buy, in time, up to 24,000 specially built Volvo XC90s which will run Uber’s self-driving software and, presumably, offer rides to Uber customers. While the rides are some time away, people have made note of this for several reasons.

  • This is a pretty big order for Volvo — it’s $1B of cars at the retail price, and 1/3 of the total sales of XC90s in 2017.
  • This is a big fleet — there are only 12,000 yellow cabs in New York City, for example, though thanks to Uber there are now far more hailable vehicles.
  • In spite of Volvo’s fairly major software efforts, they will be entirely on the hardware side for this deal, and it is not exclusive for either party.
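
The first bullet’s “$1B of cars” figure checks out against a rough retail price. The XC90 price below is an assumption (roughly its US base price at the time), not a number from the announcement.

```python
# Back-of-envelope check of the order value; the retail price is an assumption.
fleet_size = 24_000
assumed_xc90_retail_usd = 47_000
order_value = fleet_size * assumed_xc90_retail_usd
print(f"${order_value / 1e9:.2f}B")  # $1.13B
```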

I’m not clear who originally said it — I first heard it from Marc Andreessen — but “the truest form of a partnership is called a purchase order.” In spite of the scores of partnerships and joint ventures announced to get PR in the robocar space, this is a big deal, but it’s a sign of the sort of deal car makers have been afraid of. Volvo will be primarily a contract manufacturer here; Uber will own the special sauce that makes the vehicle work, and it will own the customer. You want to be Uber in this deal. But what company can refuse a $1B order?

It also represents a big shift for Uber. Uber is often the poster child for the company that replaced assets with software. It owns no cars and yet provides the most rides. Now, Uber is going to move to the capital intensive model of owning the cars, and not having to pay drivers. There will be much debate over whether it should make such a shift. As noted, it goes against everything Uber represented in the past, but there is not really much choice.

First of all, to do things the “Uber” way would require that a large number of independent parties bought and operated robocars and then contracted out to Uber to bring them riders when not being used by their owners. Like UberX without having to drive the car. The problem is, that world is still a long way away. Car companies have put their focus on cars that can’t drive unmanned — much or at all — because that’s OK for the private car buyer. They are also far behind companies like Waymo and Uber in producing taxi capable vehicles.

If Uber waited for the pool of available private cars to get large enough, it would miss the boat. Other companies would have moved into its territory and undercut it with cheaper and cooler robotaxi service.

Secondly, you really want to be very sure about the vehicles you deploy in your first round. You want to have tested them, and you need to certify their safety because you are going to be liable in accidents no matter what you do. You can get the private owners to sign a contract taking liability but you will get sued anyway as the deep pocket if you do. This means you want to control the whole experience.

The truth is, capital is pretty cheap for companies like Uber. Even cheaper for companies like Apple and Google that have the world’s largest pools of spare capital sitting around. The main risk is that these custom robocars may not have any resale value if you bet wrong on how to build them. Fortunately, taxis wear out in about 5 years of heavy use.

Uber continues to have no fear of telling the millions of drivers who work “for” them that they will be rid of them some day. Uber driver is an unusual job, and nobody thinks of it as a career, so they can get away with this.

Trolley problem gets scarier

Academic ethicists, when defending discussions of the Trolley Problem, claim that while they understand the problems are not real, they are still valuable teaching tools for examining real questions.

The problem is the public doesn’t understand this, and is morbidly fascinated beyond all rationality with the idea of machines deciding who lives or dies. This has led Barack Obama to ask about it in his first statement on robocars, and many other declarations that we must figure out this nonsense question before we deploy robocars on the road. The now-revoked proposed NHTSA guidelines of 2016 included a theoretically voluntary requirement that vendors outline their solutions to this “problem.”

This almost got more real last week, when a proposed UK bill would have demanded trolley solutions. The bill was amended at the last minute, dodging a bullet that would have delayed the deployment of life-saving technology while truly academic questions were worked out.

It is time for ethical ethicists to renounce the trolley problem. Even if, inside, they still think it’s got value, that value is far outweighed by the irrational fears and actions it triggers in public debate. Real people are dying every day on the roads, and we should not delay saving them to figure out how to do the “right” thing in hypothetical situations that are actually extremely rare to nonexistent. Figuring out the right thing is the wrong thing. Save solving trolley problems for version 4, and get to work on version 0.2.

There is real ethical work to be done, covering situations that happen every day. Real world safety tradeoffs and their morality. Driving on roads where breaking the vehicle code is the norm. Contrasting cost with safety. These are the places where ethical expertise can be valuable.

Simulators take off

For a long time I have promoted the idea of an open source simulator. Now two projects are underway.

The first is the Apollo simulator project from Baidu, and a new entrant called Carla is also in the game.

This is good to see, but I hope the two simulators also work together. One real strength of an open platform simulator is that people all around the world can contribute scenarios to it, and then every car developer can test their system in those scenarios. We want every car tested in every scenario that anybody can think of.

Waymo has developed its own simulator, and fed it with every strange thing their cars have encountered in 5M kilometers of real world driving. It’s one of the things that gives them an edge. They’ve also loaded the simulator with everything their team members can think of. This way, their driving system has the experience of seeing and trying out every odd situation that will be encountered in many lifetimes of human driving, and eventually on every type of road.

That’s great, but no one company can really build it all. This is one of the great things to crowdsource. Let all the small developers, all the academics, and even all the hobbyists build simulations of dangerous scenarios. Let people record and build scenarios for driving in every city of the world, in every situation. No one company can do that but the crowd can. This can give us the confidence that any car has at least at some level encountered far more than any human driver ever could, and handled it well.

Unusual liability rule

Some auto vendors have proposed a liability rule for privately owned robocars that would protect them from some liability. The rule would declare that if you bought a robocar from them and didn’t maintain it according to the required maintenance schedule, the car vendor would not be liable for any accident it had.

It’s easy to see why automakers would want this rule. They are scared of liability and anything that can reduce it is a plus for them.

At the same time, this will often not make sense. Just because somebody didn’t change the oil or rotate the tires should not remove liability for a mistake by the driving system that had no relation to those factors.

What’s particularly odd here is that robocars should always be very well maintained. That’s because they will be full of sensors to measure everything that’s going on, and they will also be able to constantly test every system that can be tested.
Consider the brakes, for example. Every time a robocar brakes, it can measure that the braking is happening correctly. It can measure the temperature of the brake discs. It can listen to the sound or detect vibrations. It can even, when unmanned, find itself on an empty street and hit the brakes hard to see what happens.

In other words, unexpected brake failure should be close to impossible (particularly since robocars are being designed with two or three redundant braking systems).
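The continuous self-test described above amounts to comparing expected and measured behavior on every braking event. A minimal sketch of that idea follows; the thresholds and signal names are assumptions for illustration, not any vendor's actual diagnostics:

```python
# Sketch of a per-event brake health check, as described in the text.
# Thresholds and sensor names are illustrative assumptions, not vendor specs.
def brake_health(expected_decel_ms2, measured_decel_ms2, disc_temp_c,
                 decel_tolerance=0.15, max_disc_temp_c=500):
    """Flag a braking event as anomalous if deceleration falls short
    of expectation by more than the tolerance, or if the discs overheat."""
    issues = []
    shortfall = (expected_decel_ms2 - measured_decel_ms2) / expected_decel_ms2
    if shortfall > decel_tolerance:
        issues.append("deceleration below expectation")
    if disc_temp_c > max_disc_temp_c:
        issues.append("disc overtemperature")
    return issues

# A normal braking event passes; a weak one is flagged for service.
print(brake_health(4.0, 3.9, 320))  # -> []
print(brake_health(4.0, 2.5, 320))  # -> ['deceleration below expectation']
```

A car running checks like this after every stop would schedule its own service long before a failure became dangerous.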

More to the point, a robocar will take itself in for service. When your car is not being used, it will run over for an oil change or any other maintenance it needs. You would have to deliberately stop it to prevent it from being maintained to schedule. Certainly no car in a taxi fleet will remain unmaintained except through deliberate negligence.

Robohub Podcast #248: Semi-active Prosthesis, with Peter Adamczyk



In this episode, Audrow Nash interviews Peter Adamczyk, Assistant Professor at the University of Wisconsin Madison, on semi-active foot and ankle prostheses. The difference is that active below-knee prostheses work to move the person’s weight, emulating the calf muscle, while semi-active devices use small amounts of power to improve the performance of the prosthesis. Adamczyk discusses the motivation for semi-active devices and gives three examples: shiftable shapes, controllable keels, and alignable ankles.

Peter Adamczyk

Peter Adamczyk directs the UW Biomechatronics, Assistive Devices, Gait Engineering and Rehabilitation Laboratory (UW BADGER Lab) which aims to enhance physical and functional recovery from orthopedic and neurological injury through advanced robotic devices. We study the mechanisms by which these injuries impair normal motion and coordination, and target interventions to encourage recovery and/or provide biomechanical assistance. Our work primarily addresses impairments affecting walking, running, and standing. One core focus is advanced semi-active foot prostheses for patients with lower limb amputation. Additional research addresses assessment and rehabilitation of balance impairments, hemiparesis, and other neurologically-based mobility challenges.
European Robotics Week 2017: Live coverage

We hope you’re enjoying the European Robotics Week! If you’re still looking for events to attend over the weekend, make sure to check out the map of 1000 happenings all over Europe.

One highlight was the European Robotics League competition focused on service robotics, with teams from Spain, Germany, the United Kingdom, and Portugal. The teams had to show how their robots can assist elderly people in their daily lives, all on a stage set that simulates a home.

The central event of the week was held in Brussels, and featured a “Robots Discovery” exhibition hosted by the European Committee of the Regions, where robotics experts from 30 European and regionally-funded projects outlined how their work could impact society. Exhibiting projects are listed below.

  • EurEyeCase will design instrumentation and control techniques to improve clinical outcomes for a selection of relevant and urgent eye surgery procedures for certain pathologic conditions, affecting over 16 million elderly persons worldwide.
  • MURAB has the ambition to drastically improve precision and effectiveness of the biopsy gathering for cancer diagnostic operations. Through a robotic device which can autonomously scan the target area and optimally acquire data, the use of expensive Magnetic Resonance Imaging (MRI) will be reduced to a minimum.
  • The SoftPro project will study and design soft synergy-based robotics technologies to develop new prostheses, exoskeletons, and assistive devices for upper limb rehabilitation, greatly enhancing efficacy and accessibility for a greater number of users. One example is the SoftHand Pro: a prosthetic hand that is robust, versatile, usable, strong, and delicate.
  • IoT and robotic technologies are two complementary domains with large potential for improving our daily life quality. The two showcased projects are: imec.WONDER, where a Nao robot engages in personalized interactions with people suffering from dementia, tracking behavioral disturbances by means of environmental sensors; and imec.ROBOCURE, where social robots are interfacing with networked glucose meters for improved diabetes education and follow-up therapy at home.
  • BabyRobot’s ambition is to create robots that can establish communication protocols, form collaboration plans on the fly, and create an impact beyond the consumer and healthcare application markets. BabyRobot focuses on special education for children with autism. 
  • The Vrije Universiteit Brussel is involved with many different robotics initiatives ranging from local projects (Brubotics/VUB Exoskeleton), spinoffs (Axiles), and international collaborations (CYBERLEGs). These projects are focused on assistive technologies and human-robot interactions, developing new exoskeleton technologies, commercializing new prosthetic devices such as the Axiles ankle prosthesis, and developing new powered prosthetic devices to assist those who may not be able to use current designs.
  • Early diagnosis with a non-invasive and painless endoscopic technique to eradicate colorectal cancer? Yes, a new solution exists: the Endoo medical platform. The Endoo European Project aims to develop an active colonoscopic platform for robotic guidance of a painless, innovative, smart, and soft-tethered device, in order to achieve accurate and reliable diagnosis and therapy of colonic pathologies, with high acceptance by patients for preventive mass screening.
  • The Educational Robotics for STEM (ER4STEM) project aims to turn curious children into young adults passionate about science and technology through a hands-on platform using robotics. The project’s research is aimed at developing an open operational and conceptual framework that involves pedagogical methods as well as technologies and tools for educational robotics, including a web repository of educational robotics in Europe.
  • The European Robotics League (ERL), a novel model for competitions funded by the European Commission, brings a common framework for three robotics challenges: ERL Industrial Robots, ERL Service Robots and ERL Emergency Robots, allowing teams to test their robots’ ability to face real-world situations. The ERL local and major tournaments are based in Europe and are open to international participation. European cities can apply to host an ERL tournament.
  • LUVMI is a small, lightweight rover being designed to explore polar regions of the Moon and drive into a Permanently Shadowed Region (PSR), believed to hold vast stores of water. Instruments carried by the rover will look specifically for this water which may be potentially game-changing for future manned missions to the moon.
  • Makeathons and hackathons are excellent tools to foster collaboration and co-creation in a world of complex and disruptive solutions. To connect companies, startups, and academic groups around the possibilities and needs of robots and artificial intelligence, the InQbet makeathon takes place simultaneously in Brussels, New York, and Singapore. More than 100 people attended the kick-off, and 50 expert members are developing solutions with startups.
  • Using robotics to teach children about programming and other digital skills improves motivation, makes programming tangible, and naturally links together different topics in science and engineering. Dwengo has developed several tools and teaching materials to be used during classroom activities. Moreover, international projects such as WeGoSTEM and Udavi have brought robot education to socially disadvantaged children worldwide!
  • The IDLab at UGent – imec has developed multiple quadruped robots over the past decade. By building and programming quadruped robots, one can really understand the underlying principles of movement and cognition.
  • Asbestos materials were used in many installations, flats, and offices in the past. Even though their hazardous effects on human health are well known, the material is still present in many buildings. The Bots2ReC Project aims at the development of a robotic system for the efficient automated removal of asbestos contamination, without putting human workers at risk.
  • The CoCoRo project aimed at creating a swarm of interacting, cognitive, autonomous robots. The swarm of autonomous underwater vehicles (AUVs) is able to interact in order to achieve environmental monitoring, search, and exploration of underwater habitats.
  • DexROV develops technologies for executing sub-sea dexterous interventions (maintenance of infrastructure, geology, biology, archaeology) with underwater robots (ROVs) from a remote control center, through a satellite communication link. The remote control center features a double arm and double hands, allowing the pilot to instruct dexterous operations. DexROV will be demonstrated in 2018 at 1,000 meters deep in the Mediterranean Sea, while being operated from Zaventem, Belgium.
  • The BADGER autonomous underground robotic system will be able to drill, maneuver, localise, map and navigate in the underground space, and will be equipped with tools for constructing horizontal and vertical networks of stable bores and pipelines. The proposed robotic system will operate in domains of high societal and economic impact including trenchless constructions, cabling and pipe installations, geotechnical investigations, large-scale irrigation installations, search and rescue operations, remote science and exploration applications.
  • SAGA is an ECHORD++ experiment. The goal of the project is to prove the applicability of swarm robotics to precision farming. 
  • The ICARUS project proposes a comprehensive and integrated set of unmanned search and rescue tools which consist of assistive unmanned air, ground, and sea vehicles, equipped with victim-detection sensors. The unmanned vehicles collaborate as a coordinated team, communicating via ad hoc cognitive radio networking.
  • The main goal of the H2020-SafeShore project is to cover existing gaps in coastal border surveillance, increasing internal security by preventing cross-border crime such as trafficking in human beings and the smuggling of drugs. It is designed to be integrated with existing systems and create a continuous detection line along the border.
  • The TIRAMISU project aims at providing the foundation for a global toolbox that will cover the main mine action activities, from the survey of large areas to the actual disposal of explosive hazards, including mine risk education and training tools.
  • The goal of SHERPA is to develop a mixed ground and aerial robotic platform to support search and rescue activities in a real-world hostile environment like the alpine scenario.
  • Disaster response and other tasks in dangerous and dirty environments can put human operators at risk. The ECHORD++ HyQ-REAL experiment will bring to the real world IIT’s four-legged robot, capable of a wide repertoire of indoor/outdoor motions ranging from running and jumping to carefully walking over rough terrain.
  • Co4Robots is a European-wide collaboration between industry and academia that aims to build a systemic, integrated methodology with which to accomplish complex tasks given to a group of robots in various environments such as a hotel, an office, a hospital, or a warehouse.

For more news, follow #ERW2017 on Twitter.

30+2 research reports forecast significant growth for robot industry

Press releases for this batch of 30 research reports all agree that most segments of the robotics industry are expected to grow at a double-digit pace at least through 2022.

Although these reports vary widely in their forecasts – often on the same topic – they all seem to agree that the global robotics industry is growing at a compound annual growth rate (CAGR) in the teens or greater.
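All of these forecasts rest on the same compound-growth arithmetic: end value = start value × (1 + CAGR)^years. As a quick sanity check, the AUV figures quoted in this roundup ($362.5 million in 2017, a 22.20% CAGR through 2023) do reproduce the roughly $1,206.9 million endpoint:

```python
# Compound annual growth rate (CAGR) arithmetic behind these forecasts:
#   end_value = start_value * (1 + cagr) ** years
def project(start_value, cagr, years):
    """Project a starting value forward at a given CAGR."""
    return start_value * (1 + cagr) ** years

def implied_cagr(start_value, end_value, years):
    """Recover the CAGR implied by a start value, end value, and span."""
    return (end_value / start_value) ** (1 / years) - 1

# Check the AUV market report figures: $362.5M (2017) at 22.20% CAGR
# over the six years to 2023.
projected = project(362.5, 0.2220, 6)
print(round(projected, 1))  # ~1207, consistent with the quoted $1,206.9M
```

Running either direction of the formula is a useful habit when reading these reports, since headline figures and CAGRs occasionally disagree.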

Unmanned mobile air, land and sea vehicles (commercial and military)

  • Commercial UAV Report
    Aug 2017, Interact Analysis, free
    Industry revenues for commercial-use drones are forecast to reach $15 billion by 2022, up from just $1.3 billion in 2016. This includes revenues from hardware, software/analytics and drone services. Rapidly increasing penetration rates into a huge number of commercial applications are driving a six-fold increase in drone shipments, surpassing 620,000 units in 2022. Only the trend of using drone service providers rather than purchasing hardware will temper this growth.
  • Global driverless tractors market
    Nov 2017, 109 pages, QY Research, $3,500
    Describes offerings from John Deere, Autonomous Tractor, AGCO/Fendt and CNH/Case IH.
  • Nov 2017, 127 pages, Tractica, $4,200
    Tractica forecasts that worldwide shipments of enterprise robots will grow from approximately 83,000 units in 2016 to 1.2 million units in 2022, increasing at a compound annual growth rate (CAGR) of 57% during that period.  Worldwide revenue for the enterprise robotics market will increase from $5.9 billion in 2016 to $67.9 billion in 2022.
  • Global indoor robots market
    Oct 2017, 223 pages, BIS Research, $4,499
    The global indoor robots market, which consists of cleaning, medical, security & surveillance, public relations, education, entertainment, and personal assistant robots, generated $3.7 billion in 2016 and has exhibited a high growth rate.
  • Global defense counter-UAS technologies
    Oct 2017, Frost & Sullivan, $1,500
    Over 50 global defense companies now offer some sort of counter unmanned aerial systems (C-UAS).
  • Sep 2017, 186 pages, Reports n Reports, $5,650
    The military robots market is expected to grow from an estimated $16.79 billion in 2017 to $30.83 billion by 2022, at a CAGR of 12.92%. Drivers for military robots include the rising number of terrorist activities, the increasing need for systems that can conduct remote operations for longer periods, and technological developments in unmanned systems. Mine clearance is expected to witness the highest growth during the forecast period.
  • Sep 2017, 101 pages, Absolute Reports, $4,000
    The Global Explosive Ordnance Disposal (EOD) Robot market was valued at $5.98 billion in 2016 and is expected to reach $8 billion by the end of 2022, growing at a CAGR of 4.6%.
  • Autonomous underwater vehicle market
    Aug 2017, Markets and Markets, $5,650
    The market for autonomous underwater vehicles (AUVs) is expected to grow from $362.5 million in 2017 to $1,206.9 million by 2023, at a CAGR of 22.20% between 2017 and 2023.

Industrial, collaborative and sensors

  • July 2017, IDC, subscription service
    IDC forecasts worldwide purchases of robotics, including drones and robotics-related hardware, software and services, will total $97.2 billion in 2017, an increase of 17.9% over 2016. IDC expects robotics spending to accelerate over the next five years reaching $230.7 billion in 2021 with a compound annual growth rate (CAGR) of 22.8%.
  • Nov 2017, Energias Market Research, $4,895
    The Global Collaborative Robots market is expected to increase from $177.2 million in 2016 to $4,238.3 million in 2023, at a significant CAGR of 57.4% from 2017 to 2023. Increasing investment in automation by industries to support the Industry 4.0 revolution (smart production), the low price of collaborative robots, and high return on investment (ROI) rates are the factors contributing to the growth of the global collaborative robots market during the forecast period.
  • Dec 2017, 350 pages, Data Bridge Market Research, $4,200
    The Global Industrial Robots Market accounted for $38.20 billion in 2016, growing at a CAGR of 9.54% during the forecast period of 2017 to 2024. The report contains data for the historic year 2015; the base year of calculation is 2016, and the forecast period is 2017 to 2024.
  • Global industrial and service robots market
    Nov 2017, 125 pages, QY Research, $3,560
    No forecasts available for this report.
  • Oct 2017, 87 pages, TechNavio, $2,500
    TechNavio forecasts that the market will grow steadily at a CAGR of around 12% through 2021.
  • Jul 2017, 81 pages, TechNavio, $2,500
    TechNavio forecasts the global industrial robotics rental market to grow at a CAGR of 13.58% during the period 2017-2021.
  • Oct 2017, 114 pages, Variant Market Research, $3,746
    Variant forecasts this market to reach $77.7 billion by 2024, growing at a CAGR of 9.3% from 2017 to 2024.
  • Mar 2017, 70 pages, TechNavio, $3,500
    For blind robots to pick an object, those objects must be properly positioned – a niche industry that is forecast to grow at a 7% CAGR.
  • Nov 2017, 104 pages, QY Research, $3,500
    No forecasts available for this report.
  • Global collaborative robots market
    2017, Inkwood Research, $2,500
    The Global Collaborative Robots market is expected to grow at a 49.14% CAGR during the forecast period 2017-2025; the North America collaborative robots market was valued at $74 million in 2016 and is estimated to generate net revenue of approximately $1,592 million by 2025, growing at a CAGR of 40.93%.
  • Oct 2017, 120 pages, ReportLinker, $4,795
    Forecasts the global packaging robot market to grow at a CAGR of 13.9% from 2017 to 2023.
  • Oct 2017, GMI Research, $4,786
    The market for collaborative robots is expected to grow at a CAGR of 56.6% from 2017 to 2023. Drivers include the trend toward automation, growing demand for compact, lightweight, and dexterous robots, low average selling prices, and the high returns from investing in collaborative robots.
  • Sep 2017, 108 pages, QY Research, $4,000
    The report reviews the major drive providers (Nabtesco, Harmonic Drive, Sumitomo) and four new Chinese providers as well (an important factor since there is a major backlog in harmonic drive production and much of the demand is for robots in China).
  • Oct 2017, Frost & Sullivan, $6,950
    Low-power, smaller, lighter sensors with enhanced performance attributes and minimal false alarms are driving innovations in the sensor space for safety systems, wearables, drones, radar, and intrusion detection.

Professional, agricultural, commercial and consumer service robots

  • Sep 2017, Energias Market Research, $4,895
    The global Agriculture Robot market is expected to increase from $1.03 billion in 2016, to $4.7 billion in 2023, at a CAGR of 24.31% from 2017 to 2023. The overall Agriculture Robots market is mainly driven by the focus on technological innovations such as precision farming to enhance the yield of crops.
  • Sep 2017, 127 pages, Market Insights Reports, $2,900
    Europe was the largest production market, with a market share of 48.63% in 2016; it is also the biggest consumption market, with a market share of 59.44% in 2016. North America ranked second, with a production market share of 33.28% and a consumption share of 32.52% in 2016.
  • Oct 2017, Transparency Market Research, $5,950
    The global commercial robotics market is set to rise from $5.9 billion at the end of 2017 to $17.6 billion by 2022, at a CAGR of 24.4%; 40% of the market is medical robotics.
  • Sep 2017, 205 pages, Allied Market Research, $3,840
    The global agricultural robots market accounted for an estimated $2,927 million in revenue in 2016 and is expected to reach $11,050 million in 2023.
  • Oct 2017, 203 pages, Meticulous Market Research, $4,175
    Global Food Robotics Market is expected to reach $2.2 billion by 2022 supported by a CAGR of 12.5% during the forecast period of 2017 to 2022. Drivers include lack of skilled workforce, increasing food safety regulations, rising demand for advanced food packaging and growing demand to improve productivity.
  • Oct 2017, 241 pages, Berg Insight AB, $1,890
    Ten major segments hold great market potential for the next decade: floor cleaning robots, robot lawn mowers, milking robots, telepresence robots, surgical robots, automated guided vehicles, autonomous mobile robots, unmanned aerial vehicles and humanoid, assistant and social companion robots. The installed base of service robots in these segments reached 29.6 million worldwide at the end of 2016.
  • Humanoid Robot Market
    Oct 2017, 133 pages, ReportsnReports, $5,650
    The humanoid robot market is expected to reach $3.9 billion by 2023 from $320.3 million in 2017, at a CAGR of 52.1% between 2017 and 2023. This growth can be attributed to the introduction of advanced features in humanoid robots, the increasing use of humanoids as educational robots, and growing demand from the retail industry for personal assistance.
  • Nov 2017, 126 pages, QY Research, $4,000
    Covers top manufacturers Softbank, Robotis, Hanson, Ubtech, Hasbro, Wowwee, Qihan and basic uses for this type of robot in education, entertainment, space, R&D, personal assistance, caregiving, search & rescue and PR.

Two International Federation of Robotics Annual Reports

The fact-based backbone for many of the research reports shown above are the International Federation of Robotics’ (IFR) annual World Robotics Industrial Robots and World Robotics Service Robots reports. These two books represent the official tabulation and analysis from all the robot associations around the world and cover all aspects of industrial and service robotics. The 2017 reports cover 2016 activity.

Industrial Robots: By 2020 the IFR estimates that more than 1.7 million new industrial robots will be installed in factories worldwide. In 2017 robot installations are estimated to increase by 21% in the Asia-Australia region. Robot supplies in the Americas will surge by 16% and in Europe by 8%.

Service Robots: The IFR estimated that sales of all types of robots for domestic tasks – e.g. vacuum cleaning, lawn mowing, window cleaning – could reach almost 32 million units in the period 2018-2020, with an estimated value of about $11.7 billion. Over the same period, sales of professional service robots are estimated to reach almost $18.8 billion in value, with about 400,000 units sold.

The two reports can be purchased from the IFR for $2,100 (€1800 + VAT where applicable). The reports can also be purchased separately: the industrial report in pdf format costs $1,400 (€1200)​ and the service report $700 (€600).

Locus Robotics raises $25 million for warehouse RaaS

Locus Robotics, a Wilmington, MA-based startup, raised $25 million in a Series B funding round led by Silicon Valley’s Scale Venture Partners, with additional participation from existing investors. Locus plans to use the funds to expand into international markets and build up its growing subscription-based robot fleet. Locus’ business model uses Robots-as-a-Service (RaaS), which allows customers to use Locus’ solutions without a large-scale capital investment.
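The appeal of RaaS is that a subscription converts a large capital outlay into an operating expense; whether it is ultimately cheaper depends on the time horizon. With purely hypothetical figures (these are illustrative numbers, not Locus pricing), the break-even point is easy to compute:

```python
# Toy comparison of a RaaS subscription vs. an upfront robot purchase.
# All figures below are hypothetical illustrations, not Locus pricing.
def cumulative_cost_raas(monthly_fee, months):
    """Total spent under a pure subscription model."""
    return monthly_fee * months

def cumulative_cost_capex(purchase_price, monthly_maintenance, months):
    """Total spent after buying outright plus ongoing upkeep."""
    return purchase_price + monthly_maintenance * months

# With a $2,000/month subscription vs. a $75,000 robot plus $500/month
# upkeep, purchase only becomes cheaper after 50 months.
months = next(m for m in range(1, 200)
              if cumulative_cost_capex(75_000, 500, m) < cumulative_cost_raas(2_000, m))
print(months)  # -> 51
```

For a warehouse operator unsure how long the need will last, that multi-year break-even horizon is exactly why the subscription model is attractive.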

The story of how Locus came to be is almost as interesting as why their mobile robots and RaaS business model are getting so much attention and acceptance.

In March 2012, in an effort to make its distribution centers (DCs) as efficient as possible, Amazon acquired Kiva Systems for $775 million and almost immediately took the company in-house. There was a year of confusion after the acquisition about whether Kiva would continue providing DCs with Kiva robots. It became clear that Amazon was taking all of Kiva’s production and that, at some future date, Kiva would stop supporting its existing client base and focus entirely on Amazon. That happened in April 2015, when Amazon renamed Kiva Amazon Robotics and encouraged prospective users of Kiva technology to let Amazon Robotics and Amazon Services provide fulfillment within Amazon warehouses using Amazon robots.

Locus Robotics came to be because its founders were early adopters of Kiva Systems’ technology. When they couldn’t expand with Kiva because Amazon had taken it off the market, they were inspired to engineer a system they thought better – one that empowered human pickers with mobile robots. The Locus mobile robot and related software are their solution.
