A round-up of robotics and AI ethics, part 1: principles
This blogpost is a round-up of the various sets of ethical principles of robotics and AI that have been proposed to date, ordered by date of first publication. The principles are presented here (in full or abridged) with notes and references but without commentary. If there are any (prominent) ones I’ve missed, please let me know.
Asimov’s Three Laws of Robotics (1950)
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
I have included these to explicitly acknowledge, firstly, that Asimov undoubtedly established the principle that robots (and by extension AIs) should be governed by principles, and secondly that many subsequent sets of principles have been drafted as a direct response. The three laws first appeared in Asimov’s short story Runaround [1]. This Wikipedia article provides a very good account of the three laws and their many (fictional) extensions.
Murphy and Woods’ three laws of Responsible Robotics (2009)
- A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics.
- A robot must respond to humans as appropriate for their roles.
- A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws.
These were proposed in Robin Murphy and David Woods’ paper Beyond Asimov: The Three Laws of Responsible Robotics [2].
EPSRC Principles of Robotics (2010)
- Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.
- Humans, not Robots, are responsible agents. Robots should be designed and operated as far as practicable to comply with existing laws, fundamental rights and freedoms, including privacy.
- Robots are products. They should be designed using processes which assure their safety and security.
- Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.
- The person with legal responsibility for a robot should be attributed.
These principles were drafted in 2010 and published online in 2011, but not formally published until 2017 [3] as part of a two-part special issue of Connection Science on the principles, edited by Tony Prescott & Michael Szollosy [4]. An accessible introduction to the EPSRC principles was published in New Scientist in 2011.
Future of Life Institute Asilomar principles for beneficial AI (Jan 2017)
I will not list all 23 principles but extract just a few to compare and contrast with the others listed here:
6. Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
9. Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
10. Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
11. Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
12. Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
13. Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
14. Shared Benefit: AI technologies should benefit and empower as many people as possible.
15. Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
An account of the development of the Asilomar principles can be found here.
The ACM US Public Policy Council Principles for Algorithmic Transparency and Accountability (Jan 2017)
- Awareness: Owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society.
- Access and redress: Regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions.
- Accountability: Institutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.
- Explanation: Systems and institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made. This is particularly important in public policy contexts.
- Data Provenance: A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process.
- Auditability: Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected.
- Validation and Testing: Institutions should use rigorous methods to validate their models and document those methods and results.
See the ACM announcement of these principles here. The principles form part of the ACM’s updated code of ethics.
Japanese Society for Artificial Intelligence (JSAI) Ethical Guidelines (Feb 2017)
- Contribution to humanity: Members of the JSAI will contribute to the peace, safety, welfare, and public interest of humanity.
- Abidance of laws and regulations: Members of the JSAI must respect laws and regulations relating to research and development, intellectual property, as well as any other relevant contractual agreements. Members of the JSAI must not use AI with the intention of harming others, be it directly or indirectly.
- Respect for the privacy of others: Members of the JSAI will respect the privacy of others with regards to their research and development of AI. Members of the JSAI have the duty to treat personal information appropriately and in accordance with relevant laws and regulations.
- Fairness: Members of the JSAI will always be fair. Members of the JSAI will acknowledge that the use of AI may bring about additional inequality and discrimination in society which did not exist before, and will not be biased when developing AI.
- Security: As specialists, members of the JSAI shall recognize the need for AI to be safe and acknowledge their responsibility in keeping AI under control.
- Act with integrity: Members of the JSAI are to acknowledge the significant impact which AI can have on society.
- Accountability and Social Responsibility: Members of the JSAI must verify the performance and resulting impact of AI technologies they have researched and developed.
- Communication with society and self-development: Members of the JSAI must aim to improve and enhance society’s understanding of AI.
- Abidance of ethics guidelines by AI: AI must abide by the policies described above in the same manner as the members of the JSAI in order to become a member or a quasi-member of society.
An explanation of the background and aims of these ethical guidelines can be found here, together with a link to the full principles (which are shown abridged above).
Draft principles of The Future Society’s Science, Law and Society Initiative (Oct 2017)
- AI should advance the well-being of humanity, its societies, and its natural environment.
- AI should be transparent.
- Manufacturers and operators of AI should be accountable.
- AI’s effectiveness should be measurable in the real-world applications for which it is intended.
- Operators of AI systems should have appropriate competencies.
- The norms of delegation of decisions to AI systems should be codified through thoughtful, inclusive dialogue with civil society.
This article by Nicolas Economou explains the 6 principles with a full commentary on each one.
Montréal Declaration for Responsible AI draft principles (Nov 2017)
- Well-being: The development of AI should ultimately promote the well-being of all sentient creatures.
- Autonomy: The development of AI should promote the autonomy of all human beings and control, in a responsible way, the autonomy of computer systems.
- Justice: The development of AI should promote justice and seek to eliminate all types of discrimination, notably those linked to gender, age, mental/physical abilities, sexual orientation, ethnic/social origins and religious beliefs.
- Privacy: The development of AI should offer guarantees respecting personal privacy and allowing people who use it to access their personal data as well as the kinds of information that any algorithm might use.
- Knowledge: The development of AI should promote critical thinking and protect us from propaganda and manipulation.
- Democracy: The development of AI should promote informed participation in public life, cooperation and democratic debate.
- Responsibility: The various players in the development of AI should assume their responsibility by working against the risks arising from their technological innovations.
The Montréal Declaration for Responsible AI proposes the 7 values and draft principles above (here in full with preamble, questions and definitions).
IEEE General Principles of Ethical Autonomous and Intelligent Systems (Dec 2017)
- How can we ensure that A/IS do not infringe human rights?
- Traditional metrics of prosperity do not take into account the full effect of A/IS technologies on human well-being.
- How can we assure that designers, manufacturers, owners and operators of A/IS are responsible and accountable?
- How can we ensure that A/IS are transparent?
- How can we extend the benefits and minimize the risks of A/IS technology being misused?
These 5 general principles appear in Ethically Aligned Design v2, a discussion document drafted and published by the IEEE Standards Association Global Initiative on Ethics of Autonomous and Intelligent Systems. The principles are expressed not as rules but instead as questions, or concerns, together with background and candidate recommendations.
A short article co-authored with IEEE general principles co-chair Mark Halverson, Why Principles Matter, explains the link between principles and standards and offers further commentary and references.
UNI Global Union Top 10 Principles for Ethical AI (Dec 2017)
- Demand That AI Systems Are Transparent
- Equip AI Systems With an “Ethical Black Box”
- Make AI Serve People and Planet
- Adopt a Human-In-Command Approach
- Ensure a Genderless, Unbiased AI
- Share the Benefits of AI Systems
- Secure a Just Transition and Ensure Support for Fundamental Freedoms and Rights
- Establish Global Governance Mechanisms
- Ban the Attribution of Responsibility to Robots
- Ban AI Arms Race
Drafted by UNI Global Union’s Future World of Work, these 10 principles for ethical AI (set out here with full commentary) “provide unions, shop stewards and workers with a set of concrete demands to the transparency and application of AI”.
References
[1] Asimov, Isaac (1950): Runaround, in I, Robot (The Isaac Asimov Collection ed.), Doubleday. ISBN 0-385-42304-7.
[2] Murphy, Robin; Woods, David D. (2009): Beyond Asimov: The Three Laws of Responsible Robotics. IEEE Intelligent Systems, 24 (4): 14–20.
[3] Boden, Margaret, et al. (2017): Principles of Robotics: Regulating Robots in the Real World. Connection Science, 29 (2): 124–129.
[4] Prescott, Tony; Szollosy, Michael (eds.) (2017): Ethical Principles of Robotics. Connection Science, 29 (2) and 29 (3).
Indoor drone shows are here
2017 was the year when indoor drone shows came into their own. Verity Studios’ Lucie drones alone completed more than 20,000 autonomous flights. A Synthetic Swarm of 99 Lucie micro drones started touring with Metallica (the tour is ongoing and was just announced as the 5th highest-grossing tour worldwide for 2017). Micro drones are now performing at Madison Square Garden as part of each New York Knicks home game, the first resident drone show in a full-scale arena setting. Since early 2017, a drone swarm has been performing weekly aboard a cruise ship, an industry first. And micro drones performed thousands of flights at Singapore’s Changi Airport as part of its 2017 Christmas show.
Technologically, indoor drone show systems are challenging. They are among the most sophisticated automation systems in existence, with dozens of autonomous robotic aircraft operating in a safety-critical environment. Indoor drone shows require sophisticated, distributed system control and communications architectures to split up and recombine sensing and computation between aircraft and their off-board infrastructure. Core challenges are not unlike those found in modern systems for manned aviation (e.g., combining auto-pilots, GPS, and air traffic control) and in creating tomorrow’s smart cities (e.g., combining semi-autonomous cars with intelligent traffic lights in a city).
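As a rough sketch of that on-board/off-board split (all names below are invented; Verity’s actual architecture is not public), the venue infrastructure might handle slow, global tasks such as localization and choreography, while each aircraft closes its fast, safety-critical control loops locally:

```python
# Illustrative sketch only: invented names, not Verity's architecture.
# Off-board side: slow, global work (localization, choreography).
# On-board side: fast, local, safety-critical work (attitude control).
from dataclasses import dataclass

@dataclass
class Setpoint:
    x: float
    y: float
    z: float
    t: float  # show time in seconds

class OffboardShowController:
    """Runs on venue infrastructure, once per drone per control tick."""
    def next_setpoint(self, drone_id: int, show_time: float) -> Setpoint:
        # Look up the choreographed position for this drone at this time.
        # (A real system would interpolate a pre-planned trajectory.)
        return Setpoint(x=0.0, y=0.0, z=2.0, t=show_time)

class OnboardAutopilot:
    """Runs on the aircraft at high rate, using only local sensing."""
    def track(self, setpoint: Setpoint) -> None:
        # Close the attitude/thrust loops toward the commanded position;
        # if the radio link drops, fall back to a safe hover or landing.
        pass
```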
These technological challenges are compounded by another: At least for permanent show installations, these systems need to be operated by non-experts. Two years ago, in one of the first major indoor drone shows, a swarm of micro drones flew over the audience at TED 2016. That system was operated by Verity Studios’ expert engineers. Creating a system that is easy enough to use, and reliable enough, to be operated by show staff is a huge technical challenge of its own. All of Verity’s 2017 shows mentioned above were fully client-operated, which speaks to the maturity that Verity’s drone show system has achieved.
For my colleagues and me, it is these technological challenges, together with the visual impact of indoor drone shows, that make these systems so much fun and so hugely rewarding to work with.
Creative potential
Creatively, the capabilities of today’s indoor drone show systems barely scratch the surface of the technology’s potential. For centuries, show designers were restricted to static scenes. Curtains were required to hide scene changes from the audience, lest stage hands rushing to move set pieces destroy the magic created by a live show. The introduction of automation to seamlessly move backdrops and other stage elements, followed by the debut of automated lighting that could smoothly pan and tilt traditional, stationary fixtures, was revolutionary.
Drones hold the potential to push automation further. The Lucies give a first inkling of the creative potential of flying lights that can be freely positioned in 3D space, appearing at will. Larger drones make it possible to extend that concept to nearly any object, including the creation of flying characters.
Safety
The most critical challenge for indoor drone show systems is safety. Indoor drone shows feature dozens of drones flying simultaneously and in tight formations, close to crowds of people, in a repeated fashion, in the high-pressure environment of a live show. For example, as part of the currently running New York Knicks drone show, 32 drones perform above 16 dancers, live in front of up to 20,000 people in New York’s Madison Square Garden arena, 44 times per season.
There are really only three ways to safely fly drones at live events.
The first way to achieve safety is the same one that keeps commercial aviation safe: system redundancy. Using this approach, Verity Studios’ larger Stage Flyer drones performed safely on Broadway, completing 398 shows and more than 7,000 autonomous flights, flying 8 times a week in front of up to 2,000 people for a year, without safety nets. The Stage Flyer drones are designed around redundancy: at least two of each component are used (e.g., two batteries, two flight computers, and a duplicate of each sensor), or existing redundancies are exploited. For example, the Stage Flyer drones have only four propellers and motors, like any quadcopter. However, advanced algorithms that exploit the physics of flight allow these multi-rotor vehicles to fly with fewer than four propellers. The overall design allows these drones to continue flying in spite of any individual component failure. For example, in one of the last Broadway shows, a Stage Flyer experienced a battery failure; the drone switched into its safety flight mode and landed, and the show continued with 7 instead of 8 drones. This approach to drone safety remains highly unusual: all drones available for purchase today have single points of failure.
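As a rough illustration of the supervisory logic such redundancy implies (component names invented; this is not Verity’s actual design), a drone might continuously check its duplicated components and degrade gracefully on the first failure:

```python
# Illustrative sketch of redundancy-based failover logic.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    healthy: bool = True

class RedundantDrone:
    def __init__(self) -> None:
        # At least two of each critical component, per the post.
        self.components = {
            "battery": [Component("battery-A"), Component("battery-B")],
            "flight_computer": [Component("fc-A"), Component("fc-B")],
        }

    def flight_mode(self) -> str:
        for units in self.components.values():
            healthy = [u for u in units if u.healthy]
            if not healthy:
                return "emergency_stop"       # both copies lost: worst case
            if len(healthy) < len(units):
                return "safety_flight_mode"   # single failure: land safely
        return "nominal"

# E.g., the Broadway battery failure described above:
drone = RedundantDrone()
drone.components["battery"][0].healthy = False
assert drone.flight_mode() == "safety_flight_mode"
```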
The second approach to safety is physical separation. This is how safety is usually achieved for outdoor drone shows: drones perform over a body of water, or roads are temporarily closed to create a large enough area without people. For example, the Intel drone show at the Super Bowl was recorded far away from the NRG stadium. In fact, for the Super Bowl, safety went a step further, adding “temporal separation” to the physical separation: the drone show was pre-recorded days ahead of time, and viewers in the stadium and on TV were only shown a video recording. For indoor drone light shows, physical separation can be achieved using safety nets.
The third approach to safely flying drones at live events is to make the drones so small that they are inherently safe. Verity Studios’ Lucie micro drones weigh less than 50 grams (1.8 ounces), including their flexible hull.
As the continuing string of safety incidents involving drones at live events attests, not everyone takes drone safety seriously. This is why my colleagues and I have worked with aviation experts and leading creatives to summarize best practices in an overview paper: Drone shows – Creative potential and best practices.
So, what’s in store for 2018? The appetite for indoor drone shows is huge, which is why Verity Studios is growing its team. And given the 2017 track record, there is a lot to look forward to — your favorite venue’s ceiling is the limit!
#251: Open Source Prosthetic Leg, with Elliott Rouse
In this episode, Audrow Nash interviews Elliott Rouse, Assistant Professor at the University of Michigan, about an open-source prosthetic leg: a robotic knee and ankle. Rouse’s goal is to provide an inexpensive, capable platform that researchers can use to work on prostheses without developing their own hardware, which is both time-consuming and expensive. Rouse discusses the design of the leg, the software interface, and the project’s timeline.
Elliott Rouse
Elliott Rouse is an Assistant Professor in the Mechanical Engineering Department at the University of Michigan, where he directs the Neurobionics Lab. The vision of his group is to discover the fundamental science that underlies human joint dynamics during locomotion and to incorporate these discoveries into a new class of wearable robotic technologies. The lab applies technical tools from mechanical and biomedical engineering to the complex challenges of human augmentation, physical medicine, rehabilitation and neuroscience. Dr. Rouse and his research have been featured at TED and on the Discovery Channel, CNN, National Public Radio, Wired UK, Business Insider, and Odyssey Magazine.
Top Robocar news of 2017
Here are the biggest Robocar stories of 2017.
Waymo starts pilot with no safety driver behind the wheel
By far the biggest milestone of 2017 was Waymo’s announcement of its Phoenix pilot, which features cars with no safety driver behind the wheel, and its hints at opening this pilot to the public.
The huge deal is that Waymo’s lawyers and top executives signed off on the risk of running cars with no safety driver to take over in emergencies. There is still an employee in the back who can trigger an emergency shutdown, but they can’t grab the traditional controls. A common mistake in robocar coverage is failing to understand that it’s “easy” to make a car that can do a demo, but vastly harder to make one with this level of reliability. That Waymo is declaring this level puts them very, very far ahead of other teams.
Many new LIDAR and other sensor companies enter the market
The key sensor for the first several years of robocars will almost surely be LIDAR. At some point in the future, vision may get good enough, but that date is quite uncertain. Cost is not a big issue for the first few years; safety is. So almost everybody is gearing up to use LIDAR, and many big companies and startups have announced new LIDAR sensors and lower prices.
News includes Quanergy (I am an advisor) going into production on a $250 8-line solid-state unit, several other similar units in development from many companies, and several new technologies, including 1.5-micron LIDARs from Luminar and Princeton Lightwave, 128-plane LIDARs from Velodyne, and radical alternative technologies from Oryx in Israel and others. In addition, several big players have acquired LIDAR companies, indicating they feel it is an important competitive advantage.
At the same time, Waymo (which created its own special long-range LIDAR) has been involved in a giant lawsuit against Uber, alleging that the Otto team appropriated Waymo secrets to build its own.
Here is some coverage I had on LIDAR deals.
In more recent news, Velodyne today cut the price of its 16-laser puck to $4,000. Sixteen planes is on the low side for a solo sensor, but this price is quite reasonable for anybody building a taxi.
Regulations get reversed
In 2016 NHTSA published 116 pages of robocar regulations. Under the new administration, the agency reversed course and published some surprisingly light-handed replacements. States have also been promoting local operations, with Arizona emerging as one of the winners.
Intel buys Mobileye
There were many big acquisitions with huge numbers, including nuTonomy (by Delphi), but the biggest deal ever was the $16B purchase of Mobileye by Intel.
Mobileye, of course, has a large business in the ADAS world, but Intel wants the self-driving car part and paid a multi-billion-dollar premium for it.
Uber orders 24,000 Volvos
It’s not a real order quite yet, but this stated intent to buy $1B worth of cars to carry Uber’s software shows how serious things are getting, and it should dispel the idea that Uber doesn’t intend to own a fleet.
Flying cars get a tiny bit more real
They aren’t here yet, but there’s a lot more action on Flying Cars, or in particular, multirotor drone-style vehicles able to carry a person. It looks like these are going to happen, and they are the other big change in the works for personal transportation. It remains uncertain if society will tolerate noisy helicopters filling the skies over our cities, but they certainly will be used for police, ambulance, fire and other such purposes, as well as over water and out in the country.
A little more uncertain is the Hyperloop. While the science seems to work, the real question is one of engineering and cost. Can you actually do evacuated tubes reliably and at a cost that works?
December 2017 fundings, acquisitions and IPOs
Twenty-one startups were funded in December, cumulatively raising $430 million, down from $782 million in November. Three didn’t report the amount of their funding. Only three rounds were over $50 million, of which one went to a Chinese startup. Three acquisitions were reported during the month, including two takeovers of Western robotics companies by Chinese ones. Nothing new on the IPO front.
Fundings:
- Farmers Business Network, a San Carlos, Calif.-based farmer-to-farmer network, raised $110 million in Series D funding. T. Rowe Price Associates Inc and Temasek led the round, and were joined by investors including Acre Venture Partners, Kleiner Perkins Caufield & Byers, GV and DBL Partners.
- Ripcord, a Hayward, Calif.-based robotic digitization company, raised $59.5 million this year across a March Series A and an Aug/Dec Series B equity funding led by GV and Icon Ventures with Lux Capital, Telstra Ventures, Silicon Valley Bank, Kleiner Perkins, Google and Baidu Ventures. Ripcord has developed, and provides as a service, a digitization pipeline that uses AI, scanning and robotics to go from cardboard storage boxes full of tagged manila folders to scannable PDF files available through ERP and other office systems.
- JingChi, a Chinese-funded self-driving AI systems startup based in Beijing and Silicon Valley, raised $52 million (in September) in a seed round led by Qiming Venture Partners. China Growth Capital, Nvidia GPU Ventures and other unnamed investors also participated in the round. Baidu is suing JingChi founder and former Baidu employee Wang Jing, alleging he used Baidu IP for his new startup.
- Groove X, a Tokyo startup developing a humanoid robot named Lovot, raised $38.7 million in a Series A round from Mirai Creation Fund, the government-backed Innovation Network Corporation of Japan (INCJ), Shenzhen Capital Group, Line Ventures, Dai-ichi Seiko, Global Catalyst Partners Japan (GCPJ), Taiwan’s Amtran Technology, OSG and SMBC Venture Capital. Groove X has raised $71.1 million thus far.
- Kespry, a Menlo Park, Calif.-based aerial intelligence solution provider, raised $33 million in Series C funding led by G2VP, and was joined by investors including Shell Technology Ventures, Cisco Investments, and ABB Ventures.
- Ouster, a San Francisco startup developing a $12,000 LiDAR, raised $27 million in a Series A funding round led by Cox Enterprises with participation from Fontinalis, Amity Ventures, Constellation Technology Ventures, Tao Capital Partners, and Carthona Capital.
- Fetch Robotics, a Silicon Valley logistics co-bot maker, raised $25 million in a Series B round led by San Francisco-based Sway Ventures and including existing investors O’Reilly AlphaTech Ventures, Shasta Ventures and SoftBank’s SB Group US. The round brings total funding to $48 million. Fetch, in addition to warehousing customers, is selling to “Tier 1” automakers, which like the ability of Fetch’s robots to detect and track the location of parts “to avoid losing transmissions.”
- Virtual Incision, a University of Nebraska spinout developing a miniaturized robotically assisted general surgery device, raised $18 million in a Series B funding round co-led by China’s Sinopharm Capital and existing investor Bluestem Capital, with participation from PrairieGold Venture Partners and others.
- Wuhan Cobot Technology, a Chinese co-bot startup, raised $15.4 million in a Series B round led by Lan Fund with participation by Matrix Partners and GGV Capital.
- PerceptIn, a Silicon Valley vision systems startup, raised $11 million in angel and Series A funding from Samsung Ventures, Matrix Partners and Walden International. PerceptIn also announced its new $399 Ironsides product, a full robotics vision system combining hardware and software for real-time tracking, mapping and path planning.
- Upstream Security, a San Francisco-based cybersecurity platform provider for connected cars and self-driving vehicles, raised $9 million in Series A funding. Charles River Ventures led the round and was joined by investors including Glilot Capital Partners and Maniv Mobility.
- Robocath, a French medical robotics device developer, raised $8.6 million across two funding rounds: $5.6 million in May, led by Normandie Participation and M Capital Partners with participation by NCI Gestion and GO Capital, and $3 million in December from Crédit Agricole Innovations et Territoires (CAIT), an innovation fund managed by Supernova Invest, with Cardio Participation also investing.
- FarmWise, a San Francisco agricultural robotics and IoT startup developing a weeding robot, raised $5.7 million in a seed round led by hardware-focused VC Playground Global with Felicis Ventures, Basis Set Ventures, and Valley Oak Investments also participating.
- Guardian Optical Technologies, an Israeli sensor maker, raised $5.1 million in Series A funding from Maniv Mobility and Mirai Creation Fund.
- Aeronyde, a Melbourne, Fla.-based drone infrastructure firm, raised $4.7 million in a round led by JASTech Co. Ltd.
- Elroy Air, a San Francisco startup building autonomous aircraft systems to deliver goods to the world’s most remote places, raised $4.6 million in a seed round led by Levitate Capital with participation by Shasta Ventures, Lemnos Labs and Homebrew.
- Tortuga AgTech, a Denver-based robotics startup targeting controlled-environment fruit and vegetable growers, raised a $2.4 million seed round, closed in September and led by early-stage hardware VC Root Ventures. Also participating in the round were Silicon Valley tech VCs Susa Ventures and Haystack, AME Cloud Ventures, Grit Labs, the Stanford-StartX Fund and SVG Partners. Tortuga is developing robotic systems for harvesting fresh produce in controlled environments, from indoor hydroponics to greenhouses, starting with strawberries.
- FluroSat, an Australian crop health startup, raised $770k in a seed round led by Main Sequence Ventures, manager of the Australian government’s $100 million CSIRO Innovation Fund, with participation from Airtree Ventures and Australia’s Cotton Research and Development Corporation (CRDC).
- Blue Frog Robotics, a Paris-based robotics startup, raised funding of an undisclosed amount. Fenox Venture Capital led the round with Gilles Benhamou and Benoit de Maulmin participating.
- SkyX, an Israeli agtech startup developing software methods for variable aerial spraying, raised an undisclosed seed round from Rimonim Fund.
- TetraVue, a Vista, Calif.-based 3D technology provider, raised funding of an undisclosed amount. Investors include KLA Tencor, Lam Research, Tsing Capital, Robert Bosch Venture Capital GmbH, Samsung Catalyst Fund and Nautilus Ventures.
Acquisitions:
- Chinese medtech investment firm Great Belief International has acquired the IP and assets of the SurgiBot surgical robot developed by TransEnterix for $29 million. TransEnterix retains distribution rights outside of China. SurgiBot failed an FDA application, whereupon TransEnterix acquired an Italian competitor with a less advanced product. TransEnterix will continue to develop and market that product, the Senhance robotically assisted surgery platform, which has both CE and FDA approvals. “The relationship with GBIL will allow us to advance the SurgiBot System toward global commercialization while significantly reducing our required investment and simultaneously leveraging ‘in-country’ manufacturing in the world’s most populous country,” TransEnterix president & CEO Todd Pope said.
- Estun Automation, a Chinese industrial robot and CNC manufacturer, acquired 50.01% (with the right to acquire the remainder of the shares) of Germany-based M.A.i GmbH, a 270-person integrator with facilities in the US, Italy, Romania and Germany, for around $10.5 million.
- Rockwell Automation (NYSE:ROK) acquired Odos Imaging for an undisclosed amount. Odos develops 3D imaging technologies for manufacturing systems.
Robots in Depth with Daniel Lofaro
In this episode of Robots in Depth, Per Sjöborg speaks with Daniel Lofaro, Assistant Professor at George Mason University specialising in humanoid robots.
Daniel talks about making humans and robots collaborate through co-robotics, and the need for lower-cost systems and better AI. He also mentions that robotics needs a “killer app”, something that makes it compelling enough for the customer to take the step of welcoming a robot into the business or home. Finally, Daniel discusses creating an ecosystem of robots and apps, and how competitions can help do this.
Physical adversarial examples against deep neural networks
By Ivan Evtimov, Kevin Eykholt, Earlence Fernandes, and Bo Li based on recent research by Ivan Evtimov, Kevin Eykholt, Earlence Fernandes, Tadayoshi Kohno, Bo Li, Atul Prakash, Amir Rahmati, Dawn Song, and Florian Tramèr.
Deep neural networks (DNNs) have enabled great progress in a variety of application areas, including image processing, text analysis, and speech recognition. DNNs are also being incorporated as an important component in many cyber-physical systems. For instance, the vision system of a self-driving car can take advantage of DNNs to better recognize pedestrians, vehicles, and road signs. However, recent research has shown that DNNs are vulnerable to adversarial examples: Adding carefully crafted adversarial perturbations to the inputs can mislead the target DNN into mislabeling them during run time. Such adversarial examples raise security and safety concerns when applying DNNs in the real world. For example, adversarially perturbed inputs could mislead the perceptual systems of an autonomous vehicle into misclassifying road signs, with potentially catastrophic consequences.
There have been several techniques proposed to generate adversarial examples and to defend against them. In this blog post we will briefly introduce state-of-the-art algorithms to generate digital adversarial examples, and discuss our algorithm to generate physical adversarial examples on real objects under varying environmental conditions. We will also provide an update on our efforts to generate physical adversarial examples for object detectors.
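To make the digital setting concrete, here is a minimal sketch of one standard attack, the Fast Gradient Sign Method (FGSM). It is illustrative only, assuming a generic PyTorch image classifier; it is not the physical-attack algorithm discussed in this post, which must survive varying distances, angles, and lighting.

```python
# Minimal FGSM sketch (illustrative; assumes a PyTorch classifier).
import torch
import torch.nn.functional as F

def fgsm(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # One signed-gradient step that increases the loss, keeping the
    # perturbation inside an L-infinity ball of radius epsilon.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

A perturbation budget like epsilon = 0.03 (on pixel values in [0, 1]) is typically imperceptible to humans yet enough to flip the predictions of an undefended classifier.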
Drones, volcanoes and the ‘computerisation’ of the Earth
By Adam Fish
The eruption of the Agung volcano in Bali, Indonesia has been devastating, particularly for the 55,000 local people who have had to leave their homes and move into shelters. It has also played havoc with flights in and out of the island, leaving people stranded while the experts try to work out what the volcano will do next.
But this has been a fascinating time for scholars like me who investigate the use of drones in social justice, environmental activism and crisis preparedness. The use of drones in this context is just the latest example of the “computerisation of nature” and raises questions about how reality is increasingly being constructed by software.
Amazon is developing drone delivery in the UK, drone blood delivery is happening in Rwanda, and in Indonesia people are using drones to monitor orangutan populations, map the growth and expansion of palm oil plantations, and gather information that might help us predict when volcanoes such as Agung might again erupt with devastating impact.
In Bali, I have the pleasure of working with a remarkable group of drone professionals, inventors and hackers who work for Aeroterrascan, a drone company from Bandung, on the Indonesian island of Java. As part of their corporate social responsibility, they have donated their time and technologies to the Balinese emergency and crisis response teams. It’s been fascinating to participate in a project that flies remote sensing systems high in the air in order to better understand dangerous forces deep in the Earth.
I’ve been involved in two different drone volcano missions, and a third will begin in a few days. In the first, we used drones to create an extremely accurate 3D map of the volcano, down to 20cm of accuracy. With this information, we could see whether the volcano was actually growing in size, key evidence that it was about to blow up.
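As a toy illustration of how such a map can be used (this is not the team’s actual photogrammetry pipeline), two co-registered elevation grids from successive surveys could be differenced, treating changes smaller than the quoted 20cm accuracy as noise:

```python
# Toy sketch: compare two drone-derived elevation maps (DEMs).
# Assumes the grids are co-registered and share a resolution; a real
# pipeline adds georeferencing, outlier filtering, and error modelling.
import numpy as np

def mean_uplift(dem_before: np.ndarray, dem_after: np.ndarray,
                noise_floor: float = 0.20) -> float:
    """Mean elevation change (m) over cells that moved more than
    the survey's ~20 cm vertical accuracy."""
    diff = dem_after - dem_before
    significant = np.abs(diff) > noise_floor
    return float(diff[significant].mean()) if significant.any() else 0.0
```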
The second mission involved flying a carbon dioxide and sulphur dioxide sensor through the plume. An increase in these gases can tell us whether an eruption looms. Carbon dioxide levels were indeed high, and that finding led the government to raise the threat warning to the highest level.
In the forthcoming third mission, we will use drones to see if anyone is still in the exclusion zone so they can be found and rescued.
What is interesting to me as an anthropologist is how scientists and engineers use technologies to better understand distant processes in the atmosphere and below the Earth. It has been a difficult task, flying a drone 3,000 meters to the summit of an erupting volcano. Several different groups have tried and a few expensive drones have been lost – sacrifices to what the Balinese Hindus consider a sacred mountain.
More philosophically, I am interested in better understanding the implications of having sensor systems such as drones flying about in the air, under the seas, or on volcanic craters – basically everywhere. These tools may help us to evacuate people before a crisis, but they also entail transforming organic signals into computer code. We’ve long interpreted nature through technologies that augment our senses, particularly sight. Microscopes, telescopes and binoculars have been great assets for chemistry, astronomy and biology.
The internet of nature
But the sensorification of the elements is something different. This has been called the computationalisation of Earth. We’ve heard a lot about the internet of things, but this is the internet of nature: the surveillance state turned onto biology. The present proliferation of drones is the latest step in wiring everything on the planet, in this case the air itself, to better understand the guts of a volcano.
These flying sensors, it is hoped, will give volcanologists what anthropologist Stefan Helmreich called abduction – a predictive and prophetic “argument from the future”.
But the drones, sensors and software we use provide a particular and partial worldview. Looking back at today from the future, what will be the impact of increasing datafication of nature: better crop yield, emergency preparation, endangered species monitoring? Or will this quantification of the elements result in a reduction of nature to computer logic?
There is something not fully comprehended – or more ominously not comprehensible – about how flying robots and self-driving cars equipped with remote sensing systems filter the world through big data crunching algorithms capable of generating and responding to their own artificial intelligence.
These non-human others react to the world not as ecological, social, or geological processes but as functions and feature sets in databases. I am concerned by what this software view of nature will exclude, and as they remake the world in their database image, what the implications of those exclusions might be for planetary sustainability and human autonomy.
In this future world, there may be less of a difference between engineering towards nature and the engineering of nature.
Adam Fish, Senior Lecturer in Sociology and Media Studies, Lancaster University
This article was originally published on The Conversation. Read the original article.