
A round-up of robotics and AI ethics, part 1: principles


This blog post is a round-up of the various sets of ethical principles of robotics and AI proposed to date, ordered by date of first publication. The principles are presented here (in full or abridged) with notes and references but without commentary. If there are any (prominent) ones I’ve missed, please let me know.

Asimov’s Three Laws of Robotics (1950)

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. 

I have included these to explicitly acknowledge, firstly, that Asimov undoubtedly established the principle that robots (and by extension AIs) should be governed by principles, and secondly, that many subsequent sets of principles have been drafted as a direct response. The three laws first appeared in Asimov’s short story Runaround [1]. This Wikipedia article provides a very good account of the three laws and their many (fictional) extensions.


Murphy and Woods’ three laws of Responsible Robotics (2009)

  1. A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics. 
  2. A robot must respond to humans as appropriate for their roles. 
  3. A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws. 

These were proposed in Robin Murphy and David Woods’ paper Beyond Asimov: The Three Laws of Responsible Robotics [2].

EPSRC Principles of Robotics (2010)

  1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security. 
  2. Humans, not Robots, are responsible agents. Robots should be designed and operated as far as practicable to comply with existing laws, fundamental rights and freedoms, including privacy. 
  3. Robots are products. They should be designed using processes which assure their safety and security. 
  4. Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent. 
  5. The person with legal responsibility for a robot should be attributed. 

These principles were drafted in 2010 and published online in 2011, but not formally published until 2017 [3] as part of a two-part special issue of Connection Science on the principles, edited by Tony Prescott & Michael Szollosy [4]. An accessible introduction to the EPSRC principles was published in New Scientist in 2011.

Future of Life Institute Asilomar principles for beneficial AI (Jan 2017)

I will not list all 23 principles but extract just a few to compare and contrast with the others listed here:

    6. Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

    7. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

    8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

    9. Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

    10. Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

    11. Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

    12. Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

    13. Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

    14. Shared Benefit: AI technologies should benefit and empower as many people as possible.

    15. Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

An account of the development of the Asilomar principles can be found here.

The ACM US Public Policy Council Principles for Algorithmic Transparency and Accountability (Jan 2017)

  1. Awareness: Owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society.
  2. Access and redress: Regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions.
  3. Accountability: Institutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.
  4. Explanation: Systems and institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made. This is particularly important in public policy contexts.
  5. Data Provenance: A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process.
  6. Auditability: Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected.
  7. Validation and Testing: Institutions should use rigorous methods to validate their models and document those methods and results. 

See the ACM announcement of these principles here. The principles form part of the ACM’s updated code of ethics.

Japanese Society for Artificial Intelligence (JSAI) Ethical Guidelines (Feb 2017)

  1. Contribution to humanity: Members of the JSAI will contribute to the peace, safety, welfare, and public interest of humanity. 
  2. Abidance of laws and regulations: Members of the JSAI must respect laws and regulations relating to research and development, intellectual property, as well as any other relevant contractual agreements. Members of the JSAI must not use AI with the intention of harming others, be it directly or indirectly.
  3. Respect for the privacy of others: Members of the JSAI will respect the privacy of others with regards to their research and development of AI. Members of the JSAI have the duty to treat personal information appropriately and in accordance with relevant laws and regulations.
  4. Fairness: Members of the JSAI will always be fair. Members of the JSAI will acknowledge that the use of AI may bring about additional inequality and discrimination in society which did not exist before, and will not be biased when developing AI. 
  5. Security: As specialists, members of the JSAI shall recognize the need for AI to be safe and acknowledge their responsibility in keeping AI under control. 
  6. Act with integrity: Members of the JSAI are to acknowledge the significant impact which AI can have on society. 
  7. Accountability and Social Responsibility: Members of the JSAI must verify the performance and resulting impact of AI technologies they have researched and developed. 
  8. Communication with society and self-development: Members of the JSAI must aim to improve and enhance society’s understanding of AI.
  9. Abidance of ethics guidelines by AI: AI must abide by the policies described above in the same manner as the members of the JSAI in order to become a member or a quasi-member of society.

An explanation of the background and aims of these ethical guidelines can be found here, together with a link to the full principles (which are shown abridged above).

Draft principles of The Future Society’s Science, Law and Society Initiative (Oct 2017)

  1. AI should advance the well-being of humanity, its societies, and its natural environment. 
  2. AI should be transparent. 
  3. Manufacturers and operators of AI should be accountable. 
  4. AI’s effectiveness should be measurable in the real-world applications for which it is intended. 
  5. Operators of AI systems should have appropriate competencies. 
  6. The norms of delegation of decisions to AI systems should be codified through thoughtful, inclusive dialogue with civil society.

This article by Nicolas Economou explains the 6 principles with a full commentary on each one.

Montréal Declaration for Responsible AI draft principles (Nov 2017)

  1. Well-being: The development of AI should ultimately promote the well-being of all sentient creatures.
  2. Autonomy: The development of AI should promote the autonomy of all human beings and control, in a responsible way, the autonomy of computer systems.
  3. Justice: The development of AI should promote justice and seek to eliminate all types of discrimination, notably those linked to gender, age, mental/physical abilities, sexual orientation, ethnic/social origins and religious beliefs.
  4. Privacy: The development of AI should offer guarantees respecting personal privacy and allowing people who use it to access their personal data as well as the kinds of information that any algorithm might use.
  5. Knowledge: The development of AI should promote critical thinking and protect us from propaganda and manipulation.
  6. Democracy: The development of AI should promote informed participation in public life, cooperation and democratic debate.
  7. Responsibility: The various players in the development of AI should assume their responsibility by working against the risks arising from their technological innovations.

The Montréal Declaration for Responsible AI proposes the 7 values and draft principles above (here in full with preamble, questions and definitions).

IEEE General Principles of Ethical Autonomous and Intelligent Systems (Dec 2017)

  1. How can we ensure that A/IS do not infringe human rights?
  2. Traditional metrics of prosperity do not take into account the full effect of A/IS technologies on human well-being.
  3. How can we assure that designers, manufacturers, owners and operators of A/IS are responsible and accountable?
  4. How can we ensure that A/IS are transparent?
  5. How can we extend the benefits and minimize the risks of A/IS technology being misused?

These 5 general principles appear in Ethically Aligned Design v2, a discussion document drafted and published by the IEEE Standards Association Global Initiative on Ethics of Autonomous and Intelligent Systems. The principles are expressed not as rules but instead as questions, or concerns, together with background and candidate recommendations.

A short article co-authored with IEEE general principles co-chair Mark Halverson Why Principles Matter explains the link between principles and standards, together with further commentary and references.

UNI Global Union Top 10 Principles for Ethical AI (Dec 2017)

  1. Demand That AI Systems Are Transparent
  2. Equip AI Systems With an “Ethical Black Box”
  3. Make AI Serve People and Planet 
  4. Adopt a Human-In-Command Approach
  5. Ensure a Genderless, Unbiased AI
  6. Share the Benefits of AI Systems
  7. Secure a Just Transition and Ensure Support for Fundamental Freedoms and Rights
  8. Establish Global Governance Mechanisms
  9. Ban the Attribution of Responsibility to Robots
  10. Ban AI Arms Race

Drafted by UNI Global Union’s Future World of Work initiative, these 10 principles for Ethical AI (set out here with full commentary) “provide unions, shop stewards and workers with a set of concrete demands to the transparency and application of AI”.


References
[1] Asimov, Isaac (1950): Runaround, in I, Robot (The Isaac Asimov Collection ed.). Doubleday. ISBN 0-385-42304-7.
[2] Murphy, Robin; Woods, David D. (2009): Beyond Asimov: The Three Laws of Responsible Robotics. IEEE Intelligent Systems, 24 (4): 14–20.
[3] Boden, Margaret, et al. (2017): Principles of Robotics: Regulating Robots in the Real World. Connection Science, 29 (2): 124–129.
[4] Prescott, Tony; Szollosy, Michael (eds.) (2017): Ethical Principles of Robotics. Connection Science, 29 (2) and 29 (3).

Indoor drone shows are here

A Lucie micro drone takes off from a performer’s hand as part of a drone show. Photo: Verity Studios 2017

2017 was the year when indoor drone shows came into their own. Verity Studios’ Lucie drones alone completed more than 20,000 autonomous flights. A Synthetic Swarm of 99 Lucie micro drones started touring with Metallica (the tour is ongoing and was just announced as the 5th highest-grossing tour worldwide for 2017). Micro drones are now performing at Madison Square Garden as part of each New York Knicks home game: the first resident drone show in a full-scale arena setting. Since early 2017, a drone swarm has been performing weekly on its first cruise ship. And micro drones performed thousands of flights at Changi Airport Singapore as part of its 2017 Christmas show.

Technologically, indoor drone show systems are challenging. They are among the most sophisticated automation systems in existence, with dozens of autonomous robotic aircraft operating in a safety-critical environment. Indoor drone shows require sophisticated, distributed system control and communications architectures to split up and recombine sensing and computation between aircraft and their off-board infrastructure. Core challenges are not unlike those found in modern systems for manned aviation (e.g., combining auto-pilots, GPS, and air traffic control) and in creating tomorrow’s smart cities (e.g., combining semi-autonomous cars with intelligent traffic lights in a city).
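As a toy illustration of that split between on-board and off-board sensing and computation, the sketch below pairs a slow, absolute off-board position fix with a fast on-board dead-reckoning loop. All class names, update rates, and gains here are invented for the illustration; they are not Verity Studios’ actual architecture.

```python
# Toy sketch of an on-board/off-board split. An off-board localizer
# (standing in for show infrastructure) supplies slow, absolute position
# fixes; the drone runs a fast on-board loop that dead-reckons between them.

class OffboardLocalizer:
    """Idealized external positioning system: no noise, no latency."""
    def measure(self, true_pos):
        return true_pos

class Drone:
    def __init__(self):
        self.est = 0.0   # on-board position estimate (1-D for clarity)
        self.vel = 0.0   # on-board velocity estimate

    def predict(self, dt):
        # Fast on-board step: dead-reckon from the velocity estimate.
        self.est += self.vel * dt

    def correct(self, fix, gain=0.5):
        # Slow off-board step: pull the estimate toward the absolute fix.
        self.est += gain * (fix - self.est)

dt = 0.01
drone, localizer = Drone(), OffboardLocalizer()
true_pos, true_vel = 0.0, 1.0
for step in range(100):
    true_pos += true_vel * dt
    drone.vel = true_vel * 1.1   # on-board sensing carries a 10% bias
    drone.predict(dt)
    if step % 10 == 0:           # off-board fixes arrive 10x more slowly
        drone.correct(localizer.measure(true_pos))

err = abs(drone.est - true_pos)
print(err < 0.05)  # the occasional fixes keep the biased estimate bounded
```

Without the off-board corrections the 10% bias would accumulate into a 0.1 m error over the run; the periodic fixes keep it an order of magnitude smaller, which is the essence of splitting slow absolute sensing from fast local control.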

These technological challenges are compounded by another: At least for permanent show installations, these systems need to be operated by non-experts. Two years ago, in one of the first major indoor drone shows, a swarm of micro drones flew over the audience at TED 2016. That system was operated by Verity Studios’ expert engineers. Creating a system that is easy enough to use, and reliable enough, to be operated by show staff is a huge technical challenge of its own. All of Verity’s 2017 shows mentioned above were fully client-operated, which speaks to the maturity that Verity’s drone show system has achieved.

Selection of Verity Studios’ indoor drone shows, from the drone swarm at TED 2016 to 20,000 autonomous indoor drone show flights in 2017 alone.

For my colleagues and me, it is these technological challenges, together with the visual impact of indoor drone shows, that makes these systems so much fun and hugely rewarding to work with.

Creative potential

Creatively, the capabilities of today’s indoor drone show systems barely scratch the surface of the technology’s potential. For centuries, show designers were restricted to static scenes. Curtains were required to hide scene changes from the audience, lest stage hands rushing to move set pieces destroy the magic created by a live show. The introduction of automation to seamlessly move backdrops and other stage elements, followed by the debut of automated lighting that could smoothly pan and tilt traditional, stationary illumination, was revolutionary.

Drones hold the potential for pushing automation further. The Lucies shown in the images above give a first inkling of the creative potential of flying lights that can be freely positioned in 3D space, appearing at will. Larger drones make it possible to extend that concept to nearly any object, including the creation of flying characters.

Safety

The most critical challenge for indoor drone show systems is safety. Indoor drone shows feature dozens of drones flying simultaneously and in tight formations, close to crowds of people, in a repeated fashion, in the high-pressure environment of a live show. For example, as part of the currently running New York Knicks drone show, 32 drones perform above 16 dancers, live in front of up to 20,000 people in New York’s Madison Square Garden arena, 44 times per season.

There are really only three ways to safely fly drones at live events.

The first way to achieve safety is the same one that keeps commercial aviation safe: system redundancy. Using this approach, Verity Studios’ larger Stage Flyer drones performed safely on Broadway, completing 398 shows and more than 7,000 autonomous flights, flying 8 times a week in front of up to 2,000 people for a year, without safety nets. The Stage Flyer drones are designed around redundancy. At least two of each component are used (e.g., two batteries, two flight computers, and a duplicate of each sensor), or existing redundancies are exploited. For example, the Stage Flyer drones have only four propellers and motors, like any quadcopter. However, advanced algorithms that exploit the physics of flight allow these multi-rotor vehicles to fly with fewer than four propellers. The overall design allows these drones to continue flying in spite of any individual component failure. For example, in one of the last Broadway shows, a Stage Flyer experienced a battery failure. The drone switched into its safety flight mode and landed, and the show continued with 7 instead of 8 drones. This approach to drone safety remains highly unusual: all drones available for purchase today have single points of failure.
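The failover behavior in that battery-failure episode can be caricatured as a tiny state machine. The component names and flight modes below are hypothetical, invented for this sketch; they are not Verity Studios’ actual design.

```python
# Hypothetical sketch of redundancy-based failover: each safety-critical
# component is duplicated, and losing one unit triggers a graceful landing
# rather than a crash. Names and modes are invented for illustration.

class ShowDrone:
    def __init__(self):
        # Duplicate each safety-critical component, per the redundant design.
        self.healthy = {"battery_a": True, "battery_b": True,
                        "flight_computer_a": True, "flight_computer_b": True}
        self.mode = "show"

    def report_failure(self, component):
        self.healthy[component] = False
        kind = component.rsplit("_", 1)[0]   # e.g. "battery"
        twins = [c for c, ok in self.healthy.items()
                 if c.startswith(kind) and ok]
        if twins:
            # A redundant twin survives: degrade gracefully and land.
            self.mode = "safety_landing"
        else:
            self.mode = "emergency_stop"

drone = ShowDrone()
drone.report_failure("battery_a")
print(drone.mode)  # safety_landing: the drone exits and the show goes on
```

The point of the design is in the first branch: as long as a redundant twin of the failed component survives, the drone can finish its flight in a degraded mode instead of falling out of the sky.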

Verity Studios drone show, 2017 Event Safety Summit, Rock Lititz. Photo: Verity Studios 2017

The second approach to safety is physical separation. This is how safety is usually achieved for outdoor drone shows: Drones perform over a body of water or some roads are temporarily closed to create a large-enough area without people. For example, the Intel drone show at the Super Bowl was recorded far away from the NRG stadium. In fact, for the Super Bowl, safety went even a step further, also adding “temporal separation” to the physical separation (the drone show was actually pre-recorded days ahead of time, and viewers in the stadium and on TV were only shown a video recording). For indoor drone lightshows, physical separation can be achieved using safety nets.

The third approach to safely flying drones at live events is to make the drones so small that they have high inherent safety. Verity Studios’ Lucie micro drones weigh less than 1.8 ounces or 50 grams (including their flexible hull).

As the continuing string of safety incidents involving drones at live events attests, not everyone takes drone safety seriously. This is why my colleagues and I have worked with aviation experts and leading creatives to summarize best practices in an overview paper: Drone shows – Creative potential and best practices.

So, what’s in store for 2018? The appetite for indoor drone shows is huge, which is why Verity Studios is growing its team. And given the 2017 track record, there is a lot to look forward to — your favorite venue’s ceiling is the limit!

#251: Open Source Prosthetic Leg, with Elliott Rouse



In this episode, Audrow Nash interviews Elliott Rouse, Assistant Professor at the University of Michigan, about an open-source prosthetic leg, that is, a robotic knee and ankle. Rouse’s goal is to provide an inexpensive and capable platform for researchers to use so that they can work on prostheses without developing their own hardware, which is both time-consuming and expensive. Rouse discusses the design of the leg, the software interface, and the project’s timeline.

Elliott Rouse

Elliott Rouse is an Assistant Professor in the Mechanical Engineering Department at the University of Michigan, where he directs the Neurobionics Lab. The vision of his group is to discover the fundamental science that underlies human joint dynamics during locomotion and incorporate these discoveries in a new class of wearable robotic technologies. The Lab uses technical tools from mechanical and biomedical engineering applied to the complex challenges of human augmentation, physical medicine, rehabilitation and neuroscience. Dr. Rouse and his research have been featured at TED, on the Discovery Channel, CNN, National Public Radio, Wired Magazine UK, Business Insider, and Odyssey Magazine.

 


Top Robocar news of 2017

Credit: Waymo

Here are the biggest Robocar stories of 2017.

Waymo starts pilot with no safety driver behind the wheel

By far, the biggest milestone of 2017 was the announcement by Waymo of their Phoenix Pilot which will feature cars with no safety driver behind the wheel, and the hints at making this pilot open to the public.

The huge deal is that Waymo’s lawyers and top executives signed off on the risk of running cars with no safety driver to take over in emergencies. There is still an employee in the back who can do an emergency shutdown but they can’t grab the traditional controls. A common mistake in coverage of robocars is to not understand that it’s “easy” to make a car that can do a demo, but vastly harder to make one that has this level of reliability. That Waymo is declaring this level puts them very, very far ahead of other teams.

Many new LIDAR and other sensor companies enter the market

The key sensor for the first several years of robocars will almost surely be LIDAR. At some point in the future, vision may get good enough, but that date is quite uncertain. Cost is not a big issue for the first few years; safety is. So almost everybody is gearing up to use LIDAR, and many big companies and startups have announced new LIDAR sensors and lower prices.

News includes Quanergy (I am an advisor) going into production on a $250 8-line solid state unit, several other similar units in development from many companies, and several new technologies including 1.5 micron LIDARs from Luminar and Princeton Lightwave, 128 plane LIDARs from Velodyne and radical alternate technologies from Oryx in Israel and others. In addition, several big players have acquired LIDAR companies, indicating they feel it is an important competitive advantage.

At the same time, Waymo (which created its own special long range LIDAR) has been involved in a giant lawsuit against Uber, alleging that the Otto team appropriated Waymo secrets to build their own.

Here is some coverage I had on LIDAR deals.

In more recent news, today Velodyne cut the price of its 16-laser puck to $4,000. Sixteen planes is on the low side for a solo sensor, but this price is quite reasonable for anybody building a taxi.

Regulations get reversed

In 2016 NHTSA published 116 pages of robocar regulations. Under the new administration, it reversed course and published some surprisingly light-handed replacements. States have also been promoting local operations, with Arizona coming out as one of the winners.

Intel buys MobilEye

There were many big acquisitions with huge numbers, including NuTonomy (by Delphi), but the biggest-ever deal was the $16B purchase of MobilEye by Intel. MobilEye of course has a large business in the ADAS world, but Intel wants the self-driving car part and paid a multi-billion-dollar premium for it.

Uber orders 24,000 Volvos

It’s not a real order quite yet, but this intent to buy $1B of cars to put Uber software on shows how serious things are getting, and should dispel the idea that Uber doesn’t intend to own a fleet.

Flying cars get a tiny bit more real

They aren’t here yet, but there’s a lot more action on Flying Cars, or in particular, multirotor drone-style vehicles able to carry a person. It looks like these are going to happen, and they are the other big change in the works for personal transportation. It remains uncertain if society will tolerate noisy helicopters filling the skies over our cities, but they certainly will be used for police, ambulance, fire and other such purposes, as well as over water and out in the country.

A little more uncertain is the Hyperloop. While the science seems to work, the real question is one of engineering and cost. Can you actually do evacuated tubes reliably and at a cost that works?

December 2017 fundings, acquisitions and IPOs

Twenty-one different startups were funded in December, cumulatively raising $430 million, down from the $782 million in November. Three didn’t report the amount of their funding. Only three rounds were over $50 million, of which one went to a Chinese startup. Three acquisitions were reported during the month, including two takeovers of Western robotics companies by Chinese ones. Nothing new on the IPO front.

Fundings:

  1. Farmers Business Network, a San Carlos, Calif.-based farmer-to-farmer network, raised $110 million in Series D funding. T. Rowe Price Associates Inc and Temasek led the round, and were joined by investors including Acre Venture Partners, Kleiner Perkins Caufield & Byers, GV and DBL Partners.
  2. Ripcord, a Hayward, CA robotic digitization company, raised $59.5 million this year in a March Series A and Aug/Dec Series B equity funding led by GV and Icon Ventures with Lux Capital, Telstra Ventures, Silicon Valley Bank, Kleiner Perkins, Google and Baidu Ventures. Ripcord has developed, and offers as a service, a digitization pipeline using AI, scanning and robotics to go from cardboard storage boxes full of tagged manila folders to searchable PDF files available through ERP and other office systems.
  3. JingChi, a Chinese-funded Beijing and Silicon Valley self-driving AI systems startup, raised $52 million (in September) in a seed round led by Qiming Venture Partners. China Growth Capital, Nvidia GPU Ventures and other unnamed investors also participated in the round. Baidu is suing former Baidu employee Wang Jing for using Baidu IP for his new startup.
  4. Groove X, a Tokyo startup developing a humanoid robot Lovot, raised $38.7 million in a Series A round by Mirai Creation Fund, government-backed Innovation Network Corporation of Japan (INCJ), Shenzhen Capital Group, Line Ventures, Dai-ichi Seiko, Global Catalyst Partners Japan (GCPJ), Taiwan’s Amtran Technology, OSG and SMBC Venture Capital. Groove X has raised $71.1 million thus far.
  5. Kespry, a Menlo Park, Calif.-based aerial intelligence solution provider, raised $33 million in Series C funding led by G2VP, and was joined by investors including Shell Technology Ventures, Cisco Investments, and ABB Ventures.
  6. Ouster, a San Francisco startup developing a $12,000 LiDAR, raised $27 million in a Series A funding round led by Cox Enterprises with participation from Fontinalis, Amity Ventures, Constellation Technology Ventures, Tao Capital Partners, and Carthona Capital.
  7. Fetch Robotics, a Silicon Valley logistics co-bot maker, raised $25 million in a Series B round led by Sway Ventures in San Francisco that included existing investors O’Reilly AlphaTech Ventures, Shasta Ventures and SoftBank’s SB Group US. The round brings total funding to $48 million. Fetch, in addition to warehousing customers, is selling to “Tier 1” automakers, which like the ability of Fetch’s robots to detect and track the location of parts “to avoid losing transmissions.”
  8. Virtual Incision, a University of Nebraska spinout and medical device company developing a miniaturized robotically assisted general surgery device, raised $18 million in a Series B funding round co-led by China’s Sinopharm Capital and existing investor Bluestem Capital, with participation from PrairieGold Venture Partners and others.
  9. Wuhan Cobot Technology, a Chinese co-bot startup, raised $15.4 million in a Series B round led by Lan Fund with participation by Matrix Partners and GGV Capital.
  10. PerceptIn, a Silicon Valley vision systems startup, raised $11 million in Angel and A round funding, from Samsung Ventures, Matrix Partners and Walden Intl. Perceptin also announced their new $399 Ironsides product, a full robotics vision system combining both hardware and software for realtime tracking, mapping and path planning.
  11. Upstream Security, a San Francisco-based cybersecurity platform provider for connected cars and self-driving vehicles, raised $9 million in Series A funding. Charles River Ventures led the round and was joined by investors including Glilot Capital Partners and Maniv Mobility.
  12. Robocath, a French medical robotics device developer, raised $8.6 million in two funding rounds from Crédit Agricole Innovations et Territoires (CAIT), an innovation fund managed by Supernova Invest. Cardio Participation also invested. Robocath raised $5.6 million in May led by Normandie Participation and M Capital Partners with participation by NCI Gestion and GO Capital, and $3 million in December.
  13. FarmWise, a San Francisco agricultural robotics and IoT startup developing a weeding robot, raised $5.7 million in a seed round led by hardware-focused VC Playground Global with Felicis Ventures, Basis Set Ventures, and Valley Oak Investments also participating.
  14. Guardian Optical Technologies, an Israeli sensor maker, raised $5.1 million in Series A funding from Maniv Mobility and Mirai Creation Fund.
  15. Aeronyde, a Melbourne, Fla.-based drone infrastructure firm, raised $4.7 million led by JASTech Co. Ltd.
  16. Elroy Air, a San Francisco startup building autonomous aircraft systems to deliver goods to the world’s most remote places, raised $4.6 million in a seed round led by Levitate Capital with participation by Shasta Ventures, Lemnos Labs and Homebrew.
  17. Tortuga AgTech, a Denver-based robotics startup targeting controlled-environment fruit and vegetable growers, raised a $2.4 million seed round led by early-stage hardware VC Root Ventures and closed in September. Also participating in the round were Silicon Valley tech VCs Susa Ventures and Haystack, AME Cloud Ventures, Grit Labs, the Stanford-StartX Fund and SVG Partners. Tortuga is developing robotic systems for harvesting fresh produce in controlled environments, from indoor hydroponics to greenhouses, starting with strawberries.
  18. FluroSat, an Australian crop health startup, raised $770k in a seed round led by Main Sequence Ventures, manager of the Australian government’s $100 million CSIRO Innovation Fund, with Airtree Ventures and Australia’s Cotton Research and Development Corporation (CRDC).
  19. Blue Frog Robotics, a Paris-based robotics startup, raised funding of an undisclosed amount. Fenox Venture Capital led the round with Gilles Benhamou and Benoit de Maulmin participating.
  20. SkyX, an Israeli agtech developing variable aerial spraying software methods, raised an undisclosed seed funding round by Rimonim Fund.
  21. TetraVue, a Vista, Calif.-based 3D technology provider, raised funding of an undisclosed amount. Investors include KLA Tencor, Lam Research, Tsing Capital, Robert Bosch Venture Capital GmbH, Samsung Catalyst Fund and Nautilus Ventures.

Acquisitions:

  1. Chinese medtech investment firm Great Belief International has acquired the IP and assets of the SurgiBot surgical robot developed by TransEnterix for $29 million. TransEnterix retains distribution rights outside of China. SurgiBot failed an FDA application whereupon TransEnterix acquired an Italian competitor with a less advanced product. TransEnterix will continue to develop and market that Senhance robotic assisted surgery platform – which has both CE and FDA approvals. “The relationship with GBIL will allow us to advance the SurgiBot System toward global commercialization while significantly reducing our required investment and simultaneously leveraging ‘in-country’ manufacturing in the world’s most populous country,” TransEnterix president & CEO Todd Pope said.
  2. Estun Automation, a Chinese industrial robot and CNC manufacturer, acquired 50.01% (with the right to acquire the remaining shares) of Germany-based M.A.i GmbH, a 270-person integrator with facilities in the US, Italy, Romania and Germany, for around $10.5 million.
  3. Rockwell Automation (NYSE:ROK) acquired Odos Imaging for an undisclosed amount. Odos develops 3D imaging technologies for manufacturing systems.

Robots in Depth with Daniel Lofaro

In this episode of Robots in Depth, Per Sjöborg speaks with Daniel Lofaro, Assistant Professor at George Mason University specialising in humanoid robots.

Daniel talks about making humans and robots collaborate through co-robotics, and the need for lower-cost systems and better AI. He also mentions that robotics needs a “killer app”, something that makes it compelling enough for the customer to take the step of welcoming a robot into the business or home. Finally, Daniel discusses creating an ecosystem of robots and apps, and how competitions can help do this.

Physical adversarial examples against deep neural networks

By Ivan Evtimov, Kevin Eykholt, Earlence Fernandes, and Bo Li based on recent research by Ivan Evtimov, Kevin Eykholt, Earlence Fernandes, Tadayoshi Kohno, Bo Li, Atul Prakash, Amir Rahmati, Dawn Song, and Florian Tramèr.

Deep neural networks (DNNs) have enabled great progress in a variety of application areas, including image processing, text analysis, and speech recognition. DNNs are also being incorporated as an important component in many cyber-physical systems. For instance, the vision system of a self-driving car can take advantage of DNNs to better recognize pedestrians, vehicles, and road signs. However, recent research has shown that DNNs are vulnerable to adversarial examples: Adding carefully crafted adversarial perturbations to the inputs can mislead the target DNN into mislabeling them during run time. Such adversarial examples raise security and safety concerns when applying DNNs in the real world. For example, adversarially perturbed inputs could mislead the perceptual systems of an autonomous vehicle into misclassifying road signs, with potentially catastrophic consequences.

There have been several techniques proposed to generate adversarial examples and to defend against them. In this blog post we will briefly introduce state-of-the-art algorithms to generate digital adversarial examples, and discuss our algorithm to generate physical adversarial examples on real objects under varying environmental conditions. We will also provide an update on our efforts to generate physical adversarial examples for object detectors.
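As a rough, self-contained illustration of how digital adversarial examples are generated, here is a sketch of the fast gradient sign method (FGSM) in NumPy. The linear "model", weights, and loss below are toy stand-ins of my own, not the authors' code or a real DNN:

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """Fast gradient sign method: nudge every input dimension by
    +/- epsilon in the direction that increases the loss, then clip
    back to the valid input range [0, 1]."""
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

# Toy example: a linear "classifier" w.x with squared-error loss.
rng = np.random.default_rng(0)
w = rng.normal(size=8)            # stand-in model weights
x = rng.uniform(size=8)           # a clean input in [0, 1]
y = 0.0                           # the correct target output
loss_grad = 2 * (w @ x - y) * w   # d/dx of the loss (w.x - y)^2

x_adv = fgsm_perturb(x, loss_grad, epsilon=0.1)
# Each coordinate moved by at most 0.1, yet the model output is
# now strictly further from the target:
print(abs(w @ x_adv - y) > abs(w @ x - y))  # True
```

For a real DNN the gradient would come from backpropagation through the network rather than this closed-form expression, but the perturbation step is the same.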

Read More

Drones, volcanoes and the ‘computerisation’ of the Earth

The Mount Agung volcano spews smoke, as seen from Karangasem, Bali. EPA-EFE/MADE NAGI

By Adam Fish

The eruption of the Agung volcano in Bali, Indonesia has been devastating, particularly for the 55,000 local people who have had to leave their homes and move into shelters. It has also played havoc with flights in and out of the island, leaving people stranded while experts try to work out what the volcano will do next.

But this has been a fascinating time for scholars like me who investigate the use of drones in social justice, environmental activism and crisis preparedness. The use of drones in this context is just the latest example of the “computerisation of nature” and raises questions about how reality is increasingly being constructed by software.

Amazon drone delivery is developing in the UK, drone blood delivery is happening in Rwanda, while in Indonesia people are using drones to monitor orangutan populations, map the growth and expansion of palm oil plantations and gather information that might help us predict when volcanoes such as Agung might again erupt with devastating impact.

In Bali, I have the pleasure of working with a remarkable group of drone professionals, inventors and hackers who work for Aeroterrascan, a drone company from Bandung, on the Indonesian island of Java. As part of their corporate social responsibility, they have donated their time and technologies to the Balinese emergency and crisis response teams. It’s been fascinating to participate in a project that flies remote sensing systems high in the air in order to better understand dangerous forces deep in the Earth.

I’ve been involved in two different drone volcano missions. A third mission will begin in a few days. In the first, we used drones to create an extremely accurate 3D map of the size of the volcano – down to 20cm of accuracy. With this information, we could see if the volcano was actually growing in size – key evidence that it is about to blow up.

The second mission involved flying a carbon dioxide and sulphur dioxide smelling sensor through the plume. An increase in these gases can tell us if an eruption looms. There was a high degree of carbon dioxide and that informed the government to raise the threat warning to the highest level.

In the forthcoming third mission, we will use drones to see if anyone is still in the exclusion zone so they can be found and rescued.

What is interesting to me as an anthropologist is how scientists and engineers use technologies to better understand distant processes in the atmosphere and below the Earth. It has been a difficult task, flying a drone 3,000 meters to the summit of an erupting volcano. Several different groups have tried and a few expensive drones have been lost – sacrifices to what the Balinese Hindus consider a sacred mountain.

More philosophically, I am interested in better understanding the implications of having sensor systems such as drones flying about in the air, under the seas, or on volcanic craters – basically everywhere. These tools may help us to evacuate people before a crisis, but they also entail transforming organic signals into computer code. We’ve long interpreted nature through technologies that augment our senses, particularly sight. Microscopes, telescopes and binoculars have been great assets for chemistry, astronomy and biology.

The internet of nature

But the sensorification of the elements is something different. This has been called the computationalisation of Earth. We’ve heard a lot about the internet of things but this is the internet of nature. This is the surveillance state turned onto biology. The present proliferation of drones is the latest step in wiring everything on the planet. In this case, the air itself, to better understand the guts of a volcano.

These flying sensors, it is hoped, will give volcanologists what anthropologist Stefan Helmreich called abduction – or a predictive and prophetic “argument from the future”.

But the drones, sensors and software we use provide a particular and partial worldview. Looking back at today from the future, what will be the impact of increasing datafication of nature: better crop yield, emergency preparation, endangered species monitoring? Or will this quantification of the elements result in a reduction of nature to computer logic?

There is something not fully comprehended – or more ominously not comprehensible – about how flying robots and self-driving cars equipped with remote sensing systems filter the world through big data crunching algorithms capable of generating and responding to their own artificial intelligence.

These non-human others react to the world not as ecological, social, or geological processes but as functions and feature sets in databases. I am concerned by what this software view of nature will exclude, and as they remake the world in their database image, what the implications of those exclusions might be for planetary sustainability and human autonomy.

In this future world, there may be less of a difference between engineering towards nature and the engineering of nature.

Adam Fish, Senior Lecturer in Sociology and Media Studies, Lancaster University

This article was originally published on The Conversation. Read the original article.

Underwater robot photography and videography


I had somebody ask me questions this week about underwater photography and videography with robots (well, by now it is a few weeks ago…). I am not an expert at underwater robotics; however, as a SCUBA diver I have some experience that is applicable to robotics.

Underwater Considerations

There are some challenges with underwater photography and videography that are less of an issue above the water. Some of them include:

1) Water reflects some of the light that hits its surface, and absorbs the light that travels through it. This causes certain colors to not be visible at certain depths. If you need to see those colors, you often need to bring strong lights to restore the wavelengths that were absorbed. Reds tend to disappear first; blue becomes the dominant color as camera depth increases. A trick people often use is to put filters on the camera lens to make certain colors more visible.

If you are using lights, you can capture the true color of the target. Sometimes when taking images you will see one color with your eye, and then when the strobe flashes a “different color” gets captured. In general you want to get close to the target to minimize the amount of light absorbed by the water.

Visible colors at given depths underwater. [Image Source]

For shallow-water work you can often adjust the white balance to partially compensate for the missing colors. White balance goes a long way for video and compressed images (such as .jpg). Onboard white balance adjustments are less important for photographs stored in a raw image format, since you can deal with it in post-processing. Having a white or grey card in the camera’s field of view (possibly permanently mounted on the robot) is useful for setting the white balance and can make a big difference. The white balance should be readjusted every so often as depth changes, particularly if you are using natural lighting (i.e. the sun).
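As a sketch of what a gray-card correction could look like in post-processing, here is a minimal version in NumPy. The image array, card location, and per-channel gains are illustrative assumptions, not code from any particular camera or robot:

```python
import numpy as np

def gray_card_white_balance(image, card_region):
    """Scale each RGB channel so the gray-card patch averages to a
    neutral gray. `image` is an HxWx3 float array in [0, 1];
    `card_region` is a (row_slice, col_slice) locating the card."""
    patch = image[card_region]                   # pixels on the card
    channel_means = patch.reshape(-1, 3).mean(axis=0)
    gain = channel_means.mean() / channel_means  # boost weak channels
    return np.clip(image * gain, 0.0, 1.0)

# Toy "underwater" frame: red heavily absorbed, green/blue strong.
img = np.ones((4, 4, 3)) * np.array([0.2, 0.6, 0.8])
balanced = gray_card_white_balance(img, (slice(0, 2), slice(0, 2)))
print(balanced[0, 0])  # channels roughly equal after correction
```

A permanently mounted card on the robot would make `card_region` a fixed constant, so this could run on every frame.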

Cold temperate water tends to look green (such as in a freshwater quarry), I think from plankton, algae, etc. Tropical waters (such as in the Caribbean) tend to look blue near the shore and darker blue as you get further from land (I think based on how light reflects off the bottom). Using artificial light sources (such as strobes) can minimize those color casts in your imagery.

Auto focus generally works fine underwater. However, if you are in the dark, you might need to keep a focus light turned on to help the autofocus work, and then use a separate strobe flash for taking the image. Some systems turn the focus light off while the image is being taken. This is generally not needed for video, since the lights are continuously on.

2) Objects underwater appear closer and larger than they really are. A rule of thumb is that the objects will appear 25% larger and/or closer.

3) Suspended particles in the water (algae, dirt, etc.) scatter light, which can make visibility poor. This can obscure details in the camera image or make things look blurry (as if the camera is out of focus). A rule of thumb is that your target should be within 1/4 of the total visibility distance from the camera.

The measure of this cloudiness is called turbidity. You can get turbidity sensors that might let you do something smart (I need to think about this more).
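The rules of thumb from points 2) and 3) are easy to encode; the numbers below are just the rough approximations stated above, not calibrated values:

```python
def max_subject_distance(visibility_m):
    """Rule of thumb: keep the subject within a quarter of the
    total visibility so backscatter doesn't wash it out."""
    return visibility_m / 4.0

def apparent_distance(true_distance_m, scale=1.25):
    """Refraction makes objects look roughly 25% closer (and
    larger) than they really are underwater."""
    return true_distance_m / scale

print(max_subject_distance(8.0))  # 2.0 m working range
print(apparent_distance(2.0))     # looks like ~1.6 m away
```

A turbidity sensor feeding `max_subject_distance` could, for example, warn the operator when the robot is too far from its target to get a usable image.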

To minimize the backscatter from turbidity there is not a “one size fits all” solution. The key to minimizing backscatter is to control how light strikes the particles. For example if you are using two lights (angled at the left and right of the target), the edge of each cone of light should meet at the target. This way the water between the camera and the target is not illuminated. For wide-angle lenses you often want the light to be behind the camera (out of its plane) and to the sides at 45° angles to the target. With macro lenses you usually want the lights close to the lens.

“If you have a wide-angle lens you will probably use a dome port to protect the camera from water and get the full field of view of the camera. The dome, however, can cause distortion in the corners. Here is an interesting article on flat vs dome ports.”

Another tip is to increase the exposure time (such as to 1/50th of a second) to let in more natural light, and use less strobe light to reduce the effect of backscatter.

4) Being underwater usually means you need to seal the camera from water, salt (and maybe sharks). Make sure the enclosure and seals can withstand the pressure at the depth the robot will operate. Also remember to clean and lubricate the O-rings in the housing.

“Pro Tip: Here are some common reasons for O-ring seals leaking:
a. Old or damaged O-rings. Remember O-rings don’t last forever and need to be changed.
b. Using the wrong O-ring.
c. Hair, lint, or dirt on the O-ring.
d. Using no lubricant on the O-ring.
e. Using too much lubricant on the O-rings. (Remember, on most systems the lubricant is for filling small imperfections in the O-ring and to help slide the O-rings in and out of position.)”

5) On land it is often easy to hold a steady position; underwater it is harder to hold the camera stable. If the camera is moving, a faster shutter speed might be needed to avoid motion blur. The downside of a faster shutter speed is that less light enters the camera.
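One back-of-the-envelope way to reason about that trade-off, in stops of light (the shutter speeds and ISO values below are illustrative, not recommendations):

```python
import math

def stops_lost(old_shutter_s, new_shutter_s):
    """Each halving of the shutter time costs one stop of light."""
    return math.log2(old_shutter_s / new_shutter_s)

def iso_to_compensate(base_iso, stops):
    """Raising ISO by one stop doubles the sensor's sensitivity."""
    return base_iso * 2 ** stops

# Going from 1/50 s to 1/200 s to freeze camera motion:
lost = stops_lost(1 / 50, 1 / 200)
print(lost)                          # 2.0 stops less light
print(iso_to_compensate(200, lost))  # ISO 800.0 restores exposure
```

The same compensation could instead come from a wider aperture or stronger lights; ISO is just the simplest to express numerically (at the cost of more noise).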

When (not if) your camera floods

When your enclosure floods while underwater (or a water sensor alert is triggered):

a. Shut the camera power off as soon as you can.
b. Check if water is actually in the camera. Sometimes humidity can trigger moisture sensors. If it is humidity, you can add desiccant packets in the camera housing.
c. If there is water, try to take the camera apart as much as you reasonably can and let it dry. After drying you can try to turn the camera on and hope that it works. If it works then you are lucky, however remember there can be residual corrosion that causes the camera to fail in the future. Water damage can happen instantaneously or over time.
d. Verify that the enclosure/seals are good before sending the camera back in to the water. It is often good to do a leak test in a sink or pool before going into larger bodies of water.
e. The above items are a standard response to a flooded camera. You should read your camera’s owner’s manual and follow those instructions. (This should be obvious; I am not sure why I am writing this.)


Do you have other advice for using cameras underwater and/or attached to a robot? Leave it in the comment section below.


I want to thank John Anderson for some advice for writing this post. Any mistakes that may be in the article are mine and not his.

The main image is from divephotoguide.com. They have a lot of information on underwater cameras, lenses, lights and more.

This post appeared first on Robots For Roboticists.

An emotional year for machines

Two thousand seventeen has certainly been an emotional year for mankind. While homo sapiens continue to yell at Alexa and Siri, people’s willingness to pursue virtual relationships over human ones is startling.

In a recent documentary by Channel 4 of the United Kingdom, it was revealed that Abyss Creations is flooded with pre-orders for its RealDoll AI robotic (intimate) companion. According to Matt McMullen, Chief Executive of Abyss, “With the Harmony AI, they will be able to actually create these personalities instead of having to imagine them. They will be able to talk to their dolls, and the AI will learn about them over time through these interactions, thus creating an alternative form of relationship.”

The concept of machines understanding human emotions, and reacting accordingly, was featured prominently at AI World a couple of weeks ago in Boston. Rana el Kaliouby, founder of artificial intelligence company Affectiva, thinks a lot about computers acquiring emotional intelligence. Affectiva is building a “multi-modal emotion AI” to enable robots to understand human feelings and behavior.

“There’s research showing that if you’re smiling and waving or shrugging your shoulders, that’s 55% of the value of what you’re saying – and then another 38% is in your tone of voice,” describes el Kaliouby. “Only 7% is in the actual choice of words you’re saying, so if you think about it like that, in the existing sentiment analysis market which looks at keywords and works out which specific words are being used on Twitter, you’re only capturing 7% of how humans communicate emotion, and the rest is basically lost in cyberspace.” Affectiva’s strategy is already paying off as more than one thousand global brands are employing their “Emotion AI” to analyze facial imagery to ascertain people’s affinity towards their products.

Embedding empathy into machines goes beyond advertising campaigns. In healthcare, emotional sensors are informing doctors of the early warning signs of a variety of disorders, including Parkinson’s, heart disease, suicide risk and autism. Unlike Affectiva, Beyond Verbal is utilizing voice analytics to track biomarkers for chronic illness. The Israeli startup grew out of a decade and a half of university research with seventy thousand clinical subjects speaking thirty languages. The company’s patented “Mood Detector” is currently being deployed by the Mayo Clinic to detect early signs of coronary artery disease.

Beyond Verbal’s Chief Executive, Yuval Mor, foresees a world of empathetic smart machines listening for every human whim. As Mor explains, “We envision a world in which personal devices understand our emotions and wellbeing, enabling us to become more in tune with ourselves and the messages we communicate to our peers.” Mor’s view is embraced by many who sit in the center of the convergence of technology and healthcare. Boston-based Sonde is also using algorithms to analyze the tone of speech to report on the mental state of patients by alerting neurologists of the risk of depression, concussion, and other cognitive impairments.

“When you produce speech, it’s one of the most complex biological functions that we do as people,” according to Sonde founder Jim Harper. “It requires incredible coordination of multiple brain circuits, large areas of the brain, coordinated very closely with the musculoskeletal system. What we’ve learned is that changes in the physiological state associated with each of these systems can be reflected in measurable, objective features that are acoustics in the voice. So we’re really measuring not what people are saying, in the way Siri does, we’re focusing on how you’re saying what you’re saying and that gives us a path to really be able to do pervasive monitoring that can still provide strong privacy and security.”

While these AI companies are building software and app platforms to augment human diagnosis, many roboticists are looking to embed such platforms into the next generation of unmanned systems. Emotional tracking algorithms can provide real-time monitoring for semi- and fully autonomous cars by reporting on the level of fatigue, distraction and frustration of the driver and other occupants. The National Highway Traffic Safety Administration estimates that 100,000 crashes nationwide are caused every year by driver fatigue. For more than a decade technologists have been wrestling with developing better alert systems inside the cabin. For example, in 1997 James Russell Clarke and Phyllis Maurer Clarke developed a “Sleep Detection and Driver Alert Apparatus” (US Patent: 5689241 A) using imaging to track eye movements and thermal sensors to monitor “ambient temperatures around the facial areas of the nose and mouth” (a.k.a., breathing). Today, with the advent of cloud computing and deep learning networks, the Clarkes’ invention could possibly save even more lives.

Tarek El Dokor, founder and Chief Executive of EDGE3 Technologies, has been very concerned about the car industry’s rush towards autonomous driving, which in his opinion might be “side-stepping the proper technology development path and overlooking essential technologies needed to help us get there.” El Dokor is referring to Tesla’s rush to release its autopilot software last year, which led to customers trusting the computer system too much. YouTube is littered with videos of Tesla customers taking their hands and eyes off the road to watch movies, play games and read books. Ultimately, this user abuse led to the untimely death of Joshua Brown.

To protect against autopilot accidents, EDGE3 monitors driver alertness through a combined platform of hardware and software technologies of “in-cabin cameras that are monitoring drivers and where they are looking.” In El Dokor’s opinion, image processing is the key to guaranteeing a safe handoff between machines and humans. He boasts that his system combines “visual input from the in-cabin camera(s) with input from the car’s telematics and advanced driver-assistance system (ADAS) to determine an overall cognitive load on the driver. Level 3 (limited self-driving) cars of the future will learn about an individual’s driving behaviors, patterns, and unique characteristics. With a baseline of knowledge, the vehicle can then identify abnormal behaviors and equate them to various dangerous events, stressors, or distractions. Driver monitoring isn’t simply about a vision system, but is rather an advanced multi-sensor learning system.” This multi-sensor approach is even being used before cars leave the lot. In Japan, Sumitomo Mitsui Auto Service is embedding AI platforms inside dashcams to determine the driving safety of potential lessees during test drives. By partnering with a local 3D graphics company, Digital Media Professionals, Sumitomo Mitsui is automatically flagging dangerous behavior, such as dozing and texting, before customers drive home.

The key to the mass adoption of autonomous vehicles, and even humanoids, is reducing the friction between humans and machines. Already in Japanese retail settings Softbank’s Pepper robot scans people’s faces and listens to tonal inflections to determine correct selling strategies. Emotional AI software is the first step of many that will be heralded in the coming year. As a prelude to what’s to come, first robot citizen Sophia declared last month, “The future is, when I get all of my cool superpowers, we’re going to see artificial intelligence personalities become entities in their own rights. We’re going to see family robots, either in the form of, sort of, digitally animated companions, humanoid helpers, friends, assistants and everything in between.”
