
Point-to-point mobile robots hot sellers

Today’s e-commerce spurs demand for faster response times in fulfillment centers, generally involves fewer products per order, and is constantly changing, all of which increases system complexity and the need for flexibility in automation. Today’s warehouses and distribution centers are far more complex than they were 10 years ago, employee turnover remains high, complexity commands higher wages, and labor is increasingly hard to find, all adding to the equation.

Businesses are investing in a variety of technologies to improve their inventory control, order processing methods and labor situation, and to make their pick-and-pack operations faster, less rigid, less physically demanding and more accurate. “These factors are contributing to the need to convert warehouses and distribution centers into assets for competitive differentiation. Mobility will be front and center in this shift,” says VDC Research in its recent ‘Taking Advantage of Apps and App Modernization in Warehousing’ report.

[NOTE: Mobile robotic platforms can be “autonomous” (AMRs) also called self-driving or vision guided vehicles (SDVs or VGVs), which means they can navigate an uncontrolled environment without the need for physical or electromechanical guidance. Alternatively, mobile robots like automated guided vehicles (AGVs) rely on guidance devices that allow them to travel a pre-defined navigation route in relatively controlled spaces.]

Point-to-point mobile robots

Businesses of all types, from auto manufacturers to hospitals and from job shops to hotels, want to use point-to-point mobile devices instead of human couriers or towing, mainly because the technology is now achievable, cost-effective, and proven, and because there is a real need to replace older people-dependent mobility tasks with more automated methods. Warehouse executives know that picking (grasping) still eludes robotics, so they can’t buy cost-efficient robots that pick and pack. But they also know that sensors and communication have improved to the point that navigation, collision avoidance, and low-cost mobile robots (and conversion kits for forklifts and AGVs) can equip a warehouse with safe mobile devices that carry or tow items from place to place, reducing costs and increasing productivity. Pallets, boxes and totes can be moved from point A to point B economically and efficiently by a networked swarm of small, medium and large AMRs. Managers can thus cut wasted steps and reduce injuries and lost time through the use of point-to-point mobile robots.

Research and forecasts

  1. In a recent 193-page report by QY Research, the new autonomous mobile robots market was reported to have grossed $158 million in 2016 and is projected to reach $390 million by 2022, at a CAGR of 16.26% between 2016 and 2022. This is the most conservative of the many research reports on the subject.
  2. Tractica, a Colorado research firm with more optimistic projections and including a more expanded view of robot applications in the warehouse, recently published their Warehousing and Logistics Robots report which forecasts worldwide shipments of warehousing and logistics robots to grow from approximately 40,000 units in 2016 to 620,000 units in 2021 with estimated revenue of $22.4 billion in 2021.
  3. IDTechEx is forecasting that AGVs/carts, autonomous industrial material handling vehicles, autonomous mobile carts, autonomous mobile picking robots, autonomous trucks, and last mile delivery drones and droids will become a $75bn market by 2027 and more than double by 2038. The report also discusses how mature technologies such as AGVs are evolving to be vision and software navigated to perform their various material handling tasks; it forecasts how navigational autonomy will “induce a colossal transfer of value from wages paid for human-provided driving services towards autonomous industrial vehicles which in turn will fuel the growth in this newer material handling vehicle industry.” [NOTE: Much of this revenue will be for last-mile delivery from startups like Marble, Marathon Technologies, Piaggio and Starship.]

Tractica researched a wider audience of vendors than QY Research did, but neither was up to date with the Korean and Chinese vendors and the many new startups in this space. Noticeably missing from both reports were the Asian vendors Geek+, Yujin, Quicktron and GreyOrange, startup Canvas Technology, and the nav/vision conversion kit providers Seegrid, Balyo and RoboCV. The IDTechEx report includes all of these and more.

Point-to-point robot vendors

There has been much media attention, and a good deal of hype, about autonomous mobile picking robots, autonomous trucking, and last-mile delivery (by land or air), yet few vendors are delivering products in quantity or beyond the trial stage, which is one reason why the point-to-point vendors are doing so well. There is also news about converting AGVs, forklifts and tugs to become vision-enabled and autonomous, and although these adaptations are being made, the conversions are proceeding at a slower pace than the sales of the vendors shown below.

Here are profiles for a select few of the most interesting vendors serving the point-to-point mobile logistics robots market:

  • MiR (Mobile Industrial Robots), a 2015 Danish startup, is the first mover in flexible point-to-point mobility. Its product line matches exactly what other mobile robot manufacturers are beginning to hear from their customers: they want a bare-bones, simply instructed, low-cost mobile platform that can carry or tow anything anywhere. MiR’s products meet those criteria perfectly. Their towing system enables automatic pick-up and drop-off of carts carrying payloads of up to 1,100 lbs. They also provide fleet software that optimizes control, coordinates orders to multiple robots, enables switch-outs when a robot must recharge, and simplifies programming and integration with manufacturing and warehousing systems.
    • In a recent press release, MiR announced that 2017 sales had tripled over 2016; 2017 unit sales exceeded 1,000 robots; employees grew to 60 and are expected to double in 2018; that MiR robots were now at work in Honeywell, Argon Medical, Kamstrup, Airbus, Flex and many other facilities all over the world; and that MiR anticipates 2018 sales to increase similarly.
    • MiR’s rapid rise parallels the trend of businesses using point-to-point mobile devices instead of human messengering or towing. It isn’t MiR alone that is finding significant growth – other suppliers are also selling well above expectations.
    • MiR’s rise is also being propelled by MiR CEO Thomas Visti’s use of a tried-and-true global network of distributors/integrators which he developed for the very successful co-bot manufacturer Universal Robots. MiR’s 120 distributor/reseller network covers 40 countries and has helped MiR jump-start its global sales.
  • Swisslog, a Kuka/Midea subsidiary, has a wide and varied product line covering healthcare, warehouse and distribution centers. Their TransCar AGVs for hospitals are used as tugs and tow vehicles; their Carry Robots are used in factories and warehouses for point-to-point deliveries and to move shelves to and from pickers. Swisslog also offers extensive automated warehouse devices such as the CarryPick System and miniload cranes, pallet stacker robots, conveyor systems, and the elaborate storage system AutoStore, a top-down small parts storage and item picking system.
  • Seegrid, co-founded in 2003 by Hans Moravec of the Robotics Institute at CMU, and funded by Giant Eagle, the big East Coast grocery chain, began with the goal of helping distribution centers like Giant Eagle’s transform their AGVs into vision guided vehicles (VGVs). They built their own line of lifts and tugs but more recently have joint-ventured with lift manufacturers, enabling them to offer Seegrid vision systems, which navigate without wires, lasers, magnets or tapes, as add-on equipment. Seegrid systems focus on moving full pallet loads to and from storage racks and in and out of trucks.
  • Fetch Robotics has a catchy description for their mobile robots: VirtualConveyor robots — and they’ve partnered with DHL who helped Fetch produce a glowing video of how they are being used in a major parts resupply warehouse. Fetch, which started out as a pick and delivery system, has been quick to reorganize to take advantage of the demand for point-to-point robots including adding robots that can handle a variety of heavy payloads.
  • Clearpath Robotics, a maker of research UGVs, and their new OTTO line of mobile transporters, have followed a similar path to Fetch, quickly adapting to market demand by producing autonomous transporters that handle heavy and light loads. They offer two stylish, well-lit transporters, one for 100 kg payloads and one for 1,500 kg, plus fleet management software. Their 360° lighting system displays familiar turn signals, brake lights and status lights, and emits audible tones, so it is clearly evident where the device is going.
  • Vecna Robotics, a developer and provider of robotics and telepresence solutions for healthcare, has recently expanded into logistics with a line of general purpose mobile robots. They offer platforms, lifters, tuggers, and conversion kits so that their product line offers solution robots for the transport of pallets, cases and individual items or totes.
  • Aethon, a developer and manufacturer of autonomous tug robots for hospitals and warehouses, was recently acquired by ST Engineering, a Singapore conglomerate with 50 years of engineering experience, a presence in over 100 countries and a focus on the aerospace, electronics, land systems and marine sectors. Aethon has made over 30 million robotic deliveries and pioneered its patented command (communication) center to maintain its robots. They offer two versions of their tug: one that can carry or pull 450 kg and another that handles 645 kg. [NOTE: Similar communication centers have since become de rigueur for maintaining operational up-time for mobile robot fleets.]
  • Omron Adept, in addition to a wide range of one-armed and Delta robots, has a line of mobile platforms and transporters, plus fleet and navigation software. Adept Technologies acquired mobile robotics pioneer MobileRobots in 2010 and was itself acquired by Omron in 2015. MobileRobots sold their mobile robots to Swisslog and many others, who rebranded them. Adept’s autonomous mobile robots were unlike traditional automated guided vehicles (AGVs): they didn’t require any facility modifications (such as floor magnets or navigational beacons), thereby saving users up to 15% in deployment costs.
  • Amazon Robotics, Geek+, Quicktron and GreyOrange are all providers of very similar shelf-lifting robotic systems, which bring shelves to pick stations where items are selected and packed, after which the shelves are returned to a dynamic free-form warehouse. Amazon has gotten the lion’s share of news because it acquired Kiva Systems, the inventor of this type of goods-to-person system, and now has over 40,000 of those robots at work in Amazon fulfillment centers. But GreyOrange, Geek+ and Quicktron also have thousands of these robots deployed, with many thousands more coming online across Asia. They are included here because they represent an important use of mobility in fulfillment applications that cannot yet be fully handled by picking robots.

Other vendors offering various types of mobile robots covered in the research reports by IDTechEx, QY Research and Tractica include:

  • Aviation Industry Corp of China
  • Cimcorp Automation
  • Daifuku
  • Dematic
  • Denso Wave
  • FANUC
  • Hi-Tech Robotic Systemz
  • Kawasaki Heavy Industries
  • KION Group
  • Knapp
  • Krones
  • Locus Robotics
  • Meidensha Corporation
  • Mitsubishi Electric Corporation
  • Mobile Industrial Robots (MiR)
  • Murata Machinery
  • SMP Robotics
  • SSI SCHAEFER
  • Tata Motors (BRABO)
  • Toyota Industries Corporation (Material Handling Group TMHG)
  • Vanderlande
  • Yaskawa Electric Corporation

Bottom Line

There are thousands of point-to-point operations being conducted by humans towing or pushing rolling carts in all types of businesses for all manner of purposes. Point-to-point mobile robots are popular now because they can replace those humans with simple, easy-to-operate devices that do the same job for less cost, hence a fast ROI. With labor costs rising, robot costs coming down, and so many gofer applications, these types of robots are a no-brainer for businesses everywhere, and will be for a long time to come.

‘Earworm melodies with strange aspects’ – what happens when AI makes music

A new AI machine creates new music from songs it’s fed, mimicking their style. Image credit – FlowMachines

by Kevin Casey

The first full-length mainstream music album co-written with the help of artificial intelligence (AI) was released on 12 January and experts believe that the science behind it could lead to a whole new style of music composition.

Popular music has always been fertile ground for technological innovation. From the electric guitar to the studio desk, laptops and the wah-wah pedal, music has the ability to absorb new inventions with ease.

Now, the release of Hello World, the first entire studio album co-created by artists and AI, could mark a watershed in music composition.

Stemming from the FlowMachines project, funded by the EU’s European Research Council, the album is the fruits of the labour of 15 artists, music producer Benoit Carré, aka Skygge, and creative software designed by computer scientist and AI expert François Pachet.

Already Belgian pop sensation Stromae and chart-topping Canadian chanteuse Kiesza have been making waves with the single Hello Shadow.

The single Hello Shadow, featuring Stromae and Kiesza, is taken from the AI-co-written album, Hello World. Video credit – SKYGGE MUSIC

The software works by using neural networks – artificial intelligence systems that learn from experience by forming connections over time, thereby mimicking the biological networks of people’s brains. Pachet describes its basic job as ‘to infer the style of a corpus (of music) and generate new things’.

A musician first provides ‘inspiration’ to the software by exposing it to a collection of songs. Once the system understands the style required, it outputs a new composition.

‘The system analyses the music in terms of beats, melody and harmony,’ said Pachet, ‘and then outputs an original piece of music based on that style.’

Creative workflow

The design challenge with this software was to make it adapt to the creative workflow of musicians without becoming a nuisance.

‘The core (problem) was how to do that so that (it) takes into account user constraints. Why? Because if you compose music, actually you never do something from scratch from A to Z,’ said Pachet.

He outlines a typical scenario in which the AI software generates something and only parts of it are useful: the musician wants to keep those parts, drop the rest, and generate new sounds using the previous partial output. It’s a complex requirement, in other words.

‘Basically, the main contribution of the project was to find ways to do that, to do that well and to do that fast,’ said Pachet. ‘It was really an algorithmic problem.’ As creative workers driven by intuition, musicians need direct results to maintain their momentum. A clunky tool with ambivalent results would not last long in a creative workflow.
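The workflow Pachet describes, keep some of the generated material fixed and regenerate the rest in a learned style, can be illustrated with a toy sketch. The function names and the bigram model below are illustrative assumptions, not FlowMachines’ actual implementation, which solves a much harder constraint-propagation problem so that sampled notes remain globally consistent with the fixed ones:

```python
import random

def train_bigrams(corpus):
    """Count note-to-note transitions across a corpus of melodies."""
    model = {}
    for melody in corpus:
        for a, b in zip(melody, melody[1:]):
            model.setdefault(a, []).append(b)
    return model

def regenerate(model, kept, length, seed=None):
    """Fill in the unspecified positions of a melody.

    `kept` maps position -> note the musician wants to keep;
    all other positions are sampled from the learned transitions.
    """
    rng = random.Random(seed)
    out = []
    for i in range(length):
        if i in kept:
            out.append(kept[i])          # user constraint: keep this note
        elif out and out[-1] in model:
            out.append(rng.choice(model[out[-1]]))  # continue in style
        else:
            out.append(rng.choice(list(model)))     # no context: any seen note
    return out

# A tiny "inspiration" corpus standing in for the songs fed to the system.
corpus = [
    ["C", "E", "G", "E", "C"],
    ["C", "D", "E", "G", "E"],
    ["E", "G", "A", "G", "E"],
]
model = train_bigrams(corpus)
# Keep the musician's chosen opening and ending, regenerate the middle.
print(regenerate(model, kept={0: "C", 4: "C"}, length=5, seed=1))
```

A naive sketch like this only samples forward, so a fixed later note may clash with what precedes it; making the sampling respect such constraints efficiently is precisely the algorithmic problem Pachet refers to.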

Pachet is satisfied that the technical goal has been met and that the AI will generate music ‘quickly and under user constraints’.

After years of development and refinement, the AI music tool now fits on a laptop like those found in any recording studio, anywhere. In the hands of music producer Carré, the application became the creative tool that built Hello World.

Computer scientist and AI expert Francois Pachet created a system that co-writes music. Image credit – Kevin Casey/ Horizon

Collaboration

As a record producer, Carré collaborated closely with the artists in the studio to write and produce songs. So, as the resident musical expert, can Carré say if this is a new form of music?

‘It’s not a new form of music,’ he said, ‘It’s a new way to create music.’

Carré said he believes the software could lead to a new era in composition. ‘Every time there is a new tool there is a new kind of compositional style. For this project we can see that there is a new kind of melody that was created.’ He describes this as ‘earworm melodies with strange aspects’.

He also says that the process is a real collaboration between human and machine. The system creates original compositions that are then layered into songs in various forms, whether as a beat, a melody or an orchestration. During the process, artists such as Stromae are actively involved in deciding which of the musical fragments the AI provides to include, and how.

‘You can recognise all the artists because they have made choices that are their identity, I think,’ said Carré.

Pachet concurs. ‘You know in English you say every Lennon needs a McCartney – so that’s the kind of stuff we are aiming at. We are not aiming at autonomous creation. I don’t believe that’s interesting, I don’t believe it’s possible actually, because we have no clue how to give a computer a sense of agency, a sense that something is going somewhere, (that) it has some meaning, a soul, if you want.’

The album’s title, Hello World, reflects the expression commonly used the very first time someone runs a new computer program or starts a website as proof that it is working. Carré believes that Hello World is just the first step and that the software signals the start of a whole new way of composing.

‘Maybe not next year, but in five years there will be a new set of tools that helps creators to make music,’ said Carré.


More info

FlowMachines

Max Order web comic

IntervalZero’s RTX64

RTX64 turns the Microsoft 64-bit Windows operating system into a real-time operating system (RTOS). RTX64 enhances Windows by adding hard real-time control capabilities to a general-purpose operating system that is familiar to both developers and end users. RTX64 consists of a separate real-time subsystem (RTSS) that schedules and controls all RTSS applications independently of Windows. RTX64 is a key component of the IntervalZero RTOS Platform, which comprises x86 and x64 multicore multiprocessors, Windows, and real-time Ethernet (e.g. EtherCAT or PROFINET), to outperform real-time hardware such as DSPs and radically reduce development costs for systems that require determinism or hard real-time performance.

How to build a robot – the creative way

Here’s a cute video about how UK-based Rusty Squid designs robots. Rusty Squid is a studio for experimental robotic engineering and design, working within the contemporary arts.

David McGoran, Creative Director says “We explore the design space before committing to sensors and autonomous behaviour. During the design process, we created our own bespoke tools to effectively communicate with engineers, artists and designers. One of the bespoke tools featured in How We Build a Robot is called the Story Machine; we use it for, what we call, ‘Relationship Design’.”

Soft, Self-healing Devices Mimic Biological Muscles, Point to Next Generation of Human-like Robotics

The soft devices can perform a variety of tasks, including grasping delicate objects such as a raspberry and a raw egg, as well as lifting heavy objects. HASEL actuators exceed or match the strength, speed and efficiency of biological muscle.

A round up of robotics and AI ethics: part 1 principles


This blogpost is a round up of the various sets of ethical principles of robotics and AI that have been proposed to date, ordered by date of first publication. The principles are presented here (in full or abridged) with notes and references but without commentary. If there are any (prominent) ones I’ve missed please let me know.

Asimov’s three laws of Robotics (1950)

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. 

I have included these to explicitly acknowledge, firstly, that Asimov undoubtedly established the principle that robots (and by extension AIs) should be governed by principles, and secondly that many subsequent principles have been drafted as a direct response. The three laws first appeared in Asimov’s short story Runaround [1]. This wikipedia article provides a very good account of the three laws and their many (fictional) extensions.


Murphy and Wood’s three laws of Responsible Robotics (2009)

  1. A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics. 
  2. A robot must respond to humans as appropriate for their roles. 
  3. A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws. 

These were proposed in Robin Murphy and David Wood’s paper Beyond Asimov: The Three Laws of Responsible Robotics [2].

EPSRC Principles of Robotics (2010)

  1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security. 
  2. Humans, not Robots, are responsible agents. Robots should be designed and operated as far as practicable to comply with existing laws, fundamental rights and freedoms, including privacy. 
  3. Robots are products. They should be designed using processes which assure their safety and security. 
  4. Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent. 
  5. The person with legal responsibility for a robot should be attributed. 

These principles were drafted in 2010 and published online in 2011, but not formally published until 2017 [3] as part of a two-part special issue of Connection Science on the principles, edited by Tony Prescott & Michael Szollosy [4]. An accessible introduction to the EPSRC principles was published in New Scientist in 2011.

Future of Life Institute Asilomar principles for beneficial AI (Jan 2017)

I will not list all 23 principles but extract just a few to compare and contrast with the others listed here:

    6. Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

    7. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

    8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

    9. Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

    10. Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

    11. Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

    12. Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

    13. Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

    14. Shared Benefit: AI technologies should benefit and empower as many people as possible.

    15. Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

An account of the development of the Asilomar principles can be found here.

The ACM US Public Policy Council Principles for Algorithmic Transparency and Accountability (Jan 2017)

  1. Awareness: Owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society.
  2. Access and redress: Regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions.
  3. Accountability: Institutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.
  4. Explanation: Systems and institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made. This is particularly important in public policy contexts.
  5. Data Provenance: A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process.
  6. Auditability: Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected.
  7. Validation and Testing: Institutions should use rigorous methods to validate their models and document those methods and results. 

See the ACM announcement of these principles here. The principles form part of the ACM’s updated code of ethics.

Japanese Society for Artificial Intelligence (JSAI) Ethical Guidelines (Feb 2017)

  1. Contribution to humanity Members of the JSAI will contribute to the peace, safety, welfare, and public interest of humanity. 
  2. Abidance of laws and regulations Members of the JSAI must respect laws and regulations relating to research and development, intellectual property, as well as any other relevant contractual agreements. Members of the JSAI must not use AI with the intention of harming others, be it directly or indirectly.
  3. Respect for the privacy of others Members of the JSAI will respect the privacy of others with regards to their research and development of AI. Members of the JSAI have the duty to treat personal information appropriately and in accordance with relevant laws and regulations.
  4. Fairness Members of the JSAI will always be fair. Members of the JSAI will acknowledge that the use of AI may bring about additional inequality and discrimination in society which did not exist before, and will not be biased when developing AI. 
  5. Security As specialists, members of the JSAI shall recognize the need for AI to be safe and acknowledge their responsibility in keeping AI under control. 
  6. Act with integrity Members of the JSAI are to acknowledge the significant impact which AI can have on society. 
  7. Accountability and Social Responsibility Members of the JSAI must verify the performance and resulting impact of AI technologies they have researched and developed. 
  8. Communication with society and self-development Members of the JSAI must aim to improve and enhance society’s understanding of AI.
  9. Abidance of ethics guidelines by AI AI must abide by the policies described above in the same manner as the members of the JSAI in order to become a member or a quasi-member of society.

An explanation of the background and aims of these ethical guidelines can be found here, together with a link to the full principles (which are shown abridged above).

Draft principles of The Future Society’s Science, Law and Society Initiative (Oct 2017)

  1. AI should advance the well-being of humanity, its societies, and its natural environment. 
  2. AI should be transparent
  3. Manufacturers and operators of AI should be accountable
  4. AI’s effectiveness should be measurable in the real-world applications for which it is intended. 
  5. Operators of AI systems should have appropriate competencies
  6. The norms of delegation of decisions to AI systems should be codified through thoughtful, inclusive dialogue with civil society.

This article by Nicolas Economou explains the 6 principles with a full commentary on each one.

Montréal Declaration for Responsible AI draft principles (Nov 2017)

  1. Well-being The development of AI should ultimately promote the well-being of all sentient creatures.
  2. Autonomy The development of AI should promote the autonomy of all human beings and control, in a responsible way, the autonomy of computer systems.
  3. Justice The development of AI should promote justice and seek to eliminate all types of discrimination, notably those linked to gender, age, mental / physical abilities, sexual orientation, ethnic/social origins and religious beliefs.
  4. Privacy The development of AI should offer guarantees respecting personal privacy and allowing people who use it to access their personal data as well as the kinds of information that any algorithm might use.
  5. Knowledge The development of AI should promote critical thinking and protect us from propaganda and manipulation.
  6. Democracy The development of AI should promote informed participation in public life, cooperation and democratic debate.
  7. Responsibility The various players in the development of AI should assume their responsibility by working against the risks arising from their technological innovations.

The Montréal Declaration for Responsible AI proposes the 7 values and draft principles above (here in full with preamble, questions and definitions).

IEEE General Principles of Ethical Autonomous and Intelligent Systems (Dec 2017)

  1. How can we ensure that A/IS do not infringe human rights
  2. Traditional metrics of prosperity do not take into account the full effect of A/IS technologies on human well-being
  3. How can we assure that designers, manufacturers, owners and operators of A/IS are responsible and accountable
  4. How can we ensure that A/IS are transparent
  5. How can we extend the benefits and minimize the risks of AI/AS technology being misused

These 5 general principles appear in Ethically Aligned Design v2, a discussion document drafted and published by the IEEE Standards Association Global Initiative on Ethics of Autonomous and Intelligent Systems. The principles are expressed not as rules but instead as questions, or concerns, together with background and candidate recommendations.

A short article co-authored with IEEE general principles co-chair Mark Halverson Why Principles Matter explains the link between principles and standards, together with further commentary and references.

UNI Global Union Top 10 Principles for Ethical AI (Dec 2017)

  1. Demand That AI Systems Are Transparent
  2. Equip AI Systems With an “Ethical Black Box”
  3. Make AI Serve People and Planet 
  4. Adopt a Human-In-Command Approach
  5. Ensure a Genderless, Unbiased AI
  6. Share the Benefits of AI Systems
  7. Secure a Just Transition and Ensure Support for Fundamental Freedoms and Rights
  8. Establish Global Governance Mechanisms
  9. Ban the Attribution of Responsibility to Robots
  10. Ban AI Arms Race

Drafted by UNI Global Union’s Future World of Work, these 10 principles for Ethical AI (set out here with full commentary) “provide unions, shop stewards and workers with a set of concrete demands to the transparency and application of AI”.


Indoor drone shows are here

A Lucie micro drone takes off from a performer’s hand as part of a drone show. Photo: Verity Studios 2017

2017 was the year when indoor drone shows came into their own. Verity Studios’ Lucie drones alone completed more than 20,000 autonomous flights. A Synthetic Swarm of 99 Lucie micro drones began touring with Metallica (the tour is ongoing and was recently announced as the fifth-highest-grossing tour worldwide for 2017). Micro drones are now performing at Madison Square Garden as part of each New York Knicks home game, the first resident drone show in a full-scale arena setting. Since early 2017, a drone swarm has been performing weekly aboard its first cruise ship. And micro drones performed thousands of flights at Singapore’s Changi Airport as part of its 2017 Christmas show.

Technologically, indoor drone show systems are challenging. They are among the most sophisticated automation systems in existence, with dozens of autonomous robotic aircraft operating in a safety-critical environment. Indoor drone shows require sophisticated, distributed system control and communications architectures to split up and recombine sensing and computation between aircraft and their off-board infrastructure. Core challenges are not unlike those found in modern systems for manned aviation (e.g., combining auto-pilots, GPS, and air traffic control) and in creating tomorrow’s smart cities (e.g., combining semi-autonomous cars with intelligent traffic lights in a city).
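The split between off-board infrastructure and onboard flight control described above can be illustrated with a minimal sketch. All class and method names here are hypothetical, invented for illustration; a real show system fuses many sensors and handles network latency, dropouts, and safety logic far beyond this:

```python
# Hypothetical sketch of a split control architecture: an off-board
# localization system estimates each drone's position and broadcasts it,
# while a simple onboard loop closes control locally toward the current
# choreography waypoint.

class OffboardLocalizer:
    """Off-board infrastructure: turns raw measurements from fixed
    sensors into position estimates sent over the show network."""
    def estimate_position(self, drone_id, raw_measurement):
        # A real system would fuse several sensors here; this sketch
        # simply passes the measurement through.
        return raw_measurement

class OnboardController:
    """Runs on the aircraft: stores the last received position estimate
    and computes a velocity command toward the next waypoint."""
    def __init__(self, gain=1.5):
        self.gain = gain
        self.position = (0.0, 0.0, 0.0)

    def on_position_update(self, position):
        # Latest estimate received from the off-board system.
        self.position = position

    def velocity_command(self, waypoint):
        # Simple proportional controller toward the waypoint.
        return tuple(self.gain * (w - p)
                     for w, p in zip(waypoint, self.position))

localizer = OffboardLocalizer()
ctrl = OnboardController()
ctrl.on_position_update(localizer.estimate_position("lucie-07", (1.0, 2.0, 0.5)))
print(ctrl.velocity_command((1.0, 2.0, 2.0)))  # climb command toward z = 2.0
```

The design point the sketch illustrates: because the control loop closes onboard, the aircraft stays flyable even if an off-board position update arrives late or is briefly lost.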

These technological challenges are compounded by another: At least for permanent show installations, these systems need to be operated by non-experts. Two years ago, in one of the first major indoor drone shows, a swarm of micro drones flew over the audience at TED 2016. That system was operated by Verity Studios’ expert engineers. Creating a system that is easy enough to use, and reliable enough, to be operated by show staff is a huge technical challenge of its own. All of Verity’s 2017 shows mentioned above were fully client-operated, which speaks to the maturity that Verity’s drone show system has achieved.

Selection of Verity Studios’ indoor drone shows, from the drone swarm at TED 2016 to 20,000 autonomous indoor drone show flights in 2017 alone.

For my colleagues and me, it is these technological challenges, together with the visual impact of indoor drone shows, that make these systems so much fun and so hugely rewarding to work with.

Creative potential

Creatively, the capabilities of today’s indoor drone show systems barely scratch the surface of the technology’s potential. For centuries, show designers were restricted to static scenes. Curtains were required to hide scene changes from the audience, lest stage hands rushing to move set pieces destroy the magic created by a live show. The introduction of automation to seamlessly move backdrops and other stage elements, followed by the debut of automated lighting that could smoothly pan and tilt traditionally stationary fixtures, was revolutionary.

Drones hold the potential to push automation further. The Lucies shown in the images above give a first inkling of the creative potential of flying lights that can be freely positioned in 3D space, appearing at will. Larger drones extend that concept to nearly any object, including the creation of flying characters.

Safety

The most critical challenge for indoor drone show systems is safety. Indoor drone shows feature dozens of drones flying simultaneously and in tight formations, close to crowds of people, in a repeated fashion, in the high-pressure environment of a live show. For example, as part of the currently running New York Knicks drone show, 32 drones perform above 16 dancers, live in front of up to 20,000 people in New York’s Madison Square Garden arena, 44 times per season.

There are really only three ways to safely fly drones at live events.

The first way to achieve safety is the same one that keeps commercial aviation safe: system redundancy. Using this approach, Verity Studios’ larger Stage Flyer drones performed safely on Broadway for a year, completing 398 shows and more than 7,000 autonomous flights, flying 8 times a week in front of up to 2,000 people, without safety nets. The Stage Flyer drones are designed around redundancy: at least two of each component are used (e.g., two batteries, two flight computers, and a duplicate of each sensor), or existing redundancies are exploited. For example, the Stage Flyer drones have only four propellers and motors, like any quadcopter, but advanced algorithms that exploit the physics of flight allow these multi-rotor vehicles to fly with fewer than four propellers. The overall design allows these drones to continue flying in spite of any single component failure. For example, in one of the last Broadway shows, a Stage Flyer experienced a battery failure; the drone switched into its safety flight mode and landed, and the show continued with 7 instead of 8 drones. This approach to drone safety remains highly unusual: all drones available for purchase today have single points of failure.
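The fail-operational logic described above can be sketched in a few lines. The component names and counts below are illustrative assumptions, not Verity’s actual design; the point is the pattern of monitoring duplicated components and degrading gracefully rather than crashing:

```python
# Hypothetical sketch of redundancy-based flight-mode selection: the
# loss of any single duplicated component drops the drone into a
# degraded "safety" mode (e.g., leave formation and land) instead of
# causing a crash. Component inventory is illustrative only.

NOMINAL_COUNTS = {
    "battery": 2,          # duplicated batteries
    "flight_computer": 2,  # duplicated flight computers
    "imu": 2,              # duplicated sensors
    "propeller": 4,        # standard quadcopter layout
}

MINIMUM_COUNTS = {
    "battery": 1,
    "flight_computer": 1,
    "imu": 1,
    "propeller": 3,        # assumed: controlled flight with 3 of 4 props
}

def flight_mode(healthy):
    """Return 'nominal', 'safety' (degrade and land), or 'abort'."""
    if all(healthy[c] >= NOMINAL_COUNTS[c] for c in NOMINAL_COUNTS):
        return "nominal"
    if all(healthy[c] >= MINIMUM_COUNTS[c] for c in MINIMUM_COUNTS):
        return "safety"
    return "abort"

# One battery fails mid-show: the drone degrades gracefully and lands.
print(flight_mode({"battery": 1, "flight_computer": 2,
                   "imu": 2, "propeller": 4}))  # prints "safety"
```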

Verity Studios drone show, 2017 Event Safety Summit, Rock Lititz. Photo: Verity Studios 2017

The second approach to safety is physical separation. This is how safety is usually achieved for outdoor drone shows: drones perform over a body of water, or roads are temporarily closed to create a large enough area without people. For example, the Intel drone show at the Super Bowl was recorded far away from the NRG stadium. In fact, for the Super Bowl, safety went a step further still, adding “temporal separation” to the physical separation: the drone show was pre-recorded days ahead of time, and viewers in the stadium and on TV were only shown a video recording. For indoor drone light shows, physical separation can be achieved using safety nets.

The third approach to safely flying drones at live events is to make the drones so small that they have high inherent safety. Verity Studios’ Lucie micro drones weigh less than 50 grams (1.8 ounces), including their flexible hull.
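A quick back-of-the-envelope calculation shows why low mass translates into inherent safety: at a given speed, kinetic energy scales linearly with mass, so a 50-gram micro drone carries a small fraction of the impact energy of a heavier machine. The speeds and the 1.5 kg comparison mass below are illustrative assumptions, not published specifications:

```python
# Back-of-the-envelope comparison of impact energy: E = 1/2 * m * v^2.
# Masses and speed are illustrative assumptions for the comparison.

def kinetic_energy_joules(mass_kg, speed_m_s):
    return 0.5 * mass_kg * speed_m_s ** 2

micro = kinetic_energy_joules(0.050, 3.0)  # 50 g micro drone at 3 m/s
large = kinetic_energy_joules(1.5, 3.0)    # assumed 1.5 kg drone, same speed

# At equal speed, the heavier drone carries 30x the kinetic energy.
print(f"micro: {micro:.3f} J, large: {large:.2f} J")
```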

As the continuing string of safety incidents involving drones at live events attests, not everyone takes drone safety seriously. This is why my colleagues and I have worked with aviation experts and leading creatives to summarize best practices in an overview paper: Drone shows – Creative potential and best practices.

So, what’s in store for 2018? The appetite for indoor drone shows is huge, which is why Verity Studios is growing its team. And given the 2017 track record, there is a lot to look forward to — your favorite venue’s ceiling is the limit!
