‘Earworm melodies with strange aspects’ – what happens when AI makes music
by Kevin Casey
The first full-length mainstream music album co-written with the help of artificial intelligence (AI) was released on 12 January and experts believe that the science behind it could lead to a whole new style of music composition.
Popular music has always been fertile ground for technological innovation. From the electric guitar and the wah-wah pedal to the studio desk and the laptop, music has absorbed new inventions with ease.
Now, the release of Hello World, the first entire studio album co-created by artists and AI, could mark a watershed in music composition.
Stemming from the FlowMachines project, funded by the EU’s European Research Council, the album is the fruit of the labour of 15 artists, music producer Benoit Carré, aka Skygge, and creative software designed by computer scientist and AI expert François Pachet.
Already, Belgian pop sensation Stromae and chart-topping Canadian chanteuse Kiesza have been making waves with the single Hello Shadow.
The single Hello Shadow, featuring Stromae and Kiesza, is taken from the AI-co-written album, Hello World. Video credit – SKYGGE MUSIC
The software works by using neural networks – artificial intelligence systems that learn from experience by forming connections over time, thereby mimicking the biological networks of people’s brains. Pachet describes its basic job as ‘to infer the style of a corpus (of music) and generate new things’.
A musician first provides ‘inspiration’ to the software by exposing it to a collection of songs. Once the system understands the required style, it outputs a new composition.
‘The system analyses the music in terms of beats, melody and harmony,’ said Pachet, ‘and then outputs an original piece of music based on that style.’
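To make the ‘infer a style, then generate’ loop concrete, here is a toy sketch using a simple Markov chain over notes. It is a deliberate simplification invented for illustration, not the FlowMachines system: melodies are reduced to lists of MIDI pitch numbers, and the two-song corpus is hypothetical.

```python
import random
from collections import defaultdict

def learn_transitions(corpus):
    """Count which pitch follows which across a corpus of melodies."""
    transitions = defaultdict(list)
    for melody in corpus:
        for current, nxt in zip(melody, melody[1:]):
            transitions[current].append(nxt)
    return transitions

def generate(transitions, start, length=16):
    """Random-walk the learned transitions to produce a new melody."""
    melody = [start]
    for _ in range(length - 1):
        choices = transitions.get(melody[-1]) or [start]  # dead end: restart
        melody.append(random.choice(choices))
    return melody

# Hypothetical corpus: two short melodies 'in the same style'.
corpus = [[60, 62, 64, 65, 64, 62, 60],
          [60, 64, 67, 65, 64, 62, 60]]
print(generate(learn_transitions(corpus), start=60))
```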
Creative workflow
The design challenge with this software was to make it adapt to the creative workflow of musicians without becoming a nuisance.
‘The core (problem) was how to do that so that (it) takes into account user constraints. Why? Because if you compose music, actually you never do something from scratch from A to Z,’ said Pachet.
He outlines a typical scenario in which the AI software generates something of which only parts are useful: the musician wants to keep those parts, drop the rest, and generate new material that fits around the fragments already kept. It is, in other words, a complex requirement.
‘Basically, the main contribution of the project was to find ways to do that, to do that well and to do that fast,’ said Pachet. ‘It was really an algorithmic problem.’ As creative workers driven by intuition, musicians need direct results to maintain their momentum. A clunky tool with ambivalent results would not last long in a creative workflow.
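As a loose illustration of what generating ‘under user constraints’ can mean, the sketch below builds on the toy transition table from the previous example: the musician pins the notes to keep, and only the unpinned positions are resampled, with each step still required to be a transition observed in the corpus. This greedy local resampling can hit dead ends that more sophisticated constrained-generation methods avoid; it is a sketch of the idea, not the project’s algorithm.

```python
import random

def regenerate(melody, keep, transitions):
    """Resample unpinned positions (keep[i] == False) so each step still
    follows a transition observed in the corpus."""
    out = list(melody)
    for i in range(1, len(out)):
        if keep[i]:
            continue
        candidates = transitions.get(out[i - 1], [])
        # If the next note is pinned, only allow candidates that can still
        # reach it via an observed transition.
        if i + 1 < len(out) and keep[i + 1]:
            candidates = [c for c in candidates
                          if out[i + 1] in transitions.get(c, [])]
        if candidates:
            out[i] = random.choice(candidates)  # otherwise keep the original
    return out

# Keep the opening and closing notes; let the system rework the middle.
melody = [60, 62, 64, 65, 64, 62, 60]
keep = [True, True, False, False, False, True, True]
print(regenerate(melody, keep, learn_transitions(corpus)))
```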
Pachet is satisfied that his technical goal has been met and that the AI will generate music ‘quickly and under user constraints’.
After years of development and refinement, the AI music tool now fits on a laptop of the kind found in any recording studio, anywhere. In the hands of music producer Carré, the application became the creative tool that built Hello World.
Collaboration
As a record producer, Carré collaborated closely with the artists in the studio to write and produce songs. So, as the resident musical expert, can Carré say if this is a new form of music?
‘It’s not a new form of music,’ he said. ‘It’s a new way to create music.’
Carré said he believes the software could lead to a new era in composition. ‘Every time there is a new tool there is a new kind of compositional style. For this project we can see that there is a new kind of melody that was created.’ He describes this as ‘earworm melodies with strange aspects’.
He also says that the process is a real collaboration between human and machine. The system creates original compositions that are then layered into songs in various forms, whether as a beat, a melody or an orchestration. During the process, artists such as Stromae are actively involved in deciding which of the musical fragments the AI provides to include, and how.
‘You can recognise all the artists because they have made choices that are their identity, I think,’ said Carré.
Pachet concurs. ‘You know in English you say every Lennon needs a McCartney – so that’s the kind of stuff we are aiming at. We are not aiming at autonomous creation. I don’t believe that’s interesting, I don’t believe it’s possible actually, because we have no clue how to give a computer a sense of agency, a sense that something is going somewhere, (that) it has some meaning, a soul, if you want.’
The album’s title, Hello World, reflects the expression commonly used the very first time someone runs a new computer program or starts a website, as proof that it is working. Carré believes that Hello World is just the first step and that the software signals the start of a whole new way of composing.
‘Maybe not next year, but in five years there will be a new set of tools that helps creators to make music,’ said Carré.
How to build a robot – the creative way
Here’s a cute video about how UK-based Rusty Squid designs robots. Rusty Squid is a studio for experimental robotic engineering and design, working within the contemporary arts.
David McGoran, Creative Director, says: “We explore the design space before committing to sensors and autonomous behaviour. During the design process, we created our own bespoke tools to effectively communicate with engineers, artists and designers. One of the bespoke tools featured in How We Build a Robot is called the Story Machine; we use it for what we call ‘Relationship Design’.”
Robots in Depth with Dan Kara
In this episode of Robots in Depth, Per Sjöborg speaks with Dan Kara about his views on robotics, and how a trip to Japan made him start Robotics Trends & RoboBusiness.
Special Tradeshow Coverage for ATX West 2018
Soft, Self-healing Devices Mimic Biological Muscles, Point to Next Generation of Human-like Robotics
MIT Develops Autonomous “Socially Aware” Robot Using Jackal UGV
New technique eases production, customization of soft robotics
A round-up of robotics and AI ethics: part 1 – principles
This blogpost is a round-up of the various sets of ethical principles of robotics and AI that have been proposed to date, ordered by date of first publication. The principles are presented here (in full or abridged) with notes and references but without commentary. If there are any (prominent) ones I’ve missed, please let me know.
Asimov’s three laws of Robotics (1950)
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
I have included these to explicitly acknowledge, firstly, that Asimov undoubtedly established the principle that robots (and by extension AIs) should be governed by principles, and secondly, that many subsequent principles have been drafted as a direct response to them. The three laws first appeared in Asimov’s short story Runaround [1]. This Wikipedia article provides a very good account of the three laws and their many (fictional) extensions.
Murphy and Woods’ three laws of Responsible Robotics (2009)
- A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics.
- A robot must respond to humans as appropriate for their roles.
- A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws.
These were proposed in Robin Murphy and David Woods’ paper Beyond Asimov: The Three Laws of Responsible Robotics [2].
EPSRC Principles of Robotics (2010)
- Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.
- Humans, not Robots, are responsible agents. Robots should be designed and operated as far as practicable to comply with existing laws, fundamental rights and freedoms, including privacy.
- Robots are products. They should be designed using processes which assure their safety and security.
- Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.
- The person with legal responsibility for a robot should be attributed.
These principles were drafted in 2010 and published online in 2011, but not formally published until 2017 [3] as part of a two-part special issue of Connection Science on the principles, edited by Tony Prescott & Michael Szollosy [4]. An accessible introduction to the EPSRC principles was published in New Scientist in 2011.
Future of Life Institute Asilomar principles for beneficial AI (Jan 2017)
I will not list all 23 principles but extract just a few to compare and contrast with the others listed here:
6. Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
9. Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
10. Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
11. Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
12. Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
13. Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
14. Shared Benefit: AI technologies should benefit and empower as many people as possible.
15. Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
An account of the development of the Asilomar principles can be found here.
The ACM US Public Policy Council Principles for Algorithmic Transparency and Accountability (Jan 2017)
- Awareness: Owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society.
- Access and redress: Regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions.
- Accountability: Institutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.
- Explanation: Systems and institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made. This is particularly important in public policy contexts.
- Data Provenance: A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process.
- Auditability: Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected.
- Validation and Testing: Institutions should use rigorous methods to validate their models and document those methods and results.
See the ACM announcement of these principles here. The principles form part of the ACM’s updated code of ethics.
Japanese Society for Artificial Intelligence (JSAI) Ethical Guidelines (Feb 2017)
- Contribution to humanity Members of the JSAI will contribute to the peace, safety, welfare, and public interest of humanity.
- Abidance of laws and regulations Members of the JSAI must respect laws and regulations relating to research and development, intellectual property, as well as any other relevant contractual agreements. Members of the JSAI must not use AI with the intention of harming others, be it directly or indirectly.
- Respect for the privacy of others Members of the JSAI will respect the privacy of others with regards to their research and development of AI. Members of the JSAI have the duty to treat personal information appropriately and in accordance with relevant laws and regulations.
- Fairness Members of the JSAI will always be fair. Members of the JSAI will acknowledge that the use of AI may bring about additional inequality and discrimination in society which did not exist before, and will not be biased when developing AI.
- Security As specialists, members of the JSAI shall recognize the need for AI to be safe and acknowledge their responsibility in keeping AI under control.
- Act with integrity Members of the JSAI are to acknowledge the significant impact which AI can have on society.
- Accountability and Social Responsibility Members of the JSAI must verify the performance and resulting impact of AI technologies they have researched and developed.
- Communication with society and self-development Members of the JSAI must aim to improve and enhance society’s understanding of AI.
- Abidance of ethics guidelines by AI AI must abide by the policies described above in the same manner as the members of the JSAI in order to become a member or a quasi-member of society.
An explanation of the background and aims of these ethical guidelines can be found here, together with a link to the full principles (which are shown abridged above).
Draft principles of The Future Society’s Science, Law and Society Initiative (Oct 2017)
- AI should advance the well-being of humanity, its societies, and its natural environment.
- AI should be transparent.
- Manufacturers and operators of AI should be accountable.
- AI’s effectiveness should be measurable in the real-world applications for which it is intended.
- Operators of AI systems should have appropriate competencies.
- The norms of delegation of decisions to AI systems should be codified through thoughtful, inclusive dialogue with civil society.
This article by Nicolas Economou explains the 6 principles with a full commentary on each one.
Montréal Declaration for Responsible AI draft principles (Nov 2017)
- Well-being The development of AI should ultimately promote the well-being of all sentient creatures.
- Autonomy The development of AI should promote the autonomy of all human beings and control, in a responsible way, the autonomy of computer systems.
- Justice The development of AI should promote justice and seek to eliminate all types of discrimination, notably those linked to gender, age, mental/physical abilities, sexual orientation, ethnic/social origins and religious beliefs.
- Privacy The development of AI should offer guarantees respecting personal privacy and allowing people who use it to access their personal data as well as the kinds of information that any algorithm might use.
- Knowledge The development of AI should promote critical thinking and protect us from propaganda and manipulation.
- Democracy The development of AI should promote informed participation in public life, cooperation and democratic debate.
- Responsibility The various players in the development of AI should assume their responsibility by working against the risks arising from their technological innovations.
The Montréal Declaration for Responsible AI proposes the 7 values and draft principles above (here in full with preamble, questions and definitions).
IEEE General Principles of Ethical Autonomous and Intelligent Systems (Dec 2017)
- How can we ensure that A/IS do not infringe human rights?
- Traditional metrics of prosperity do not take into account the full effect of A/IS technologies on human well-being.
- How can we assure that designers, manufacturers, owners and operators of A/IS are responsible and accountable?
- How can we ensure that A/IS are transparent?
- How can we extend the benefits and minimize the risks of AI/AS technology being misused?
These 5 general principles appear in Ethically Aligned Design v2, a discussion document drafted and published by the IEEE Standards Association Global Initiative on Ethics of Autonomous and Intelligent Systems. The principles are expressed not as rules but instead as questions, or concerns, together with background and candidate recommendations.
A short article co-authored with IEEE general principles co-chair Mark Halverson, Why Principles Matter, explains the link between principles and standards, together with further commentary and references.
UNI Global Union Top 10 Principles for Ethical AI (Dec 2017)
- Demand That AI Systems Are Transparent
- Equip AI Systems With an “Ethical Black Box”
- Make AI Serve People and Planet
- Adopt a Human-In-Command Approach
- Ensure a Genderless, Unbiased AI
- Share the Benefits of AI Systems
- Secure a Just Transition and Ensuring Support for Fundamental Freedoms and Rights
- Establish Global Governance Mechanisms
- Ban the Attribution of Responsibility to Robots
- Ban AI Arms Race
Drafted by UNI Global Union’s Future World of Work, these 10 principles for Ethical AI (set out here with full commentary) “provide unions, shop stewards and workers with a set of concrete demands to the transparency and application of AI”.
References
[1] Asimov, Isaac (1950): Runaround. In I, Robot (The Isaac Asimov Collection ed.). Doubleday. ISBN 0-385-42304-7.
[2] Murphy, Robin; Woods, David D. (2009): Beyond Asimov: The Three Laws of Responsible Robotics. IEEE Intelligent Systems. 24 (4): 14–20.
[3] Boden, Margaret, et al (2017): Principles of Robotics: Regulating Robots in the Real World. Connection Science. 29 (2): 124–129.
[4] Tony Prescott and Michael Szollosy (eds.) (2017): Ethical Principles of Robotics, Connection Science. 29 (2) and 29 (3).
This New Kind of Robot Can Adapt to Physical Damage
Indoor drone shows are here
2017 was the year when indoor drone shows came into their own. Verity Studios’ Lucie drones alone completed more than 20,000 autonomous flights. A Synthetic Swarm of 99 Lucie micro drones started touring with Metallica (the tour is ongoing and was just announced as the 5th highest-grossing tour worldwide for 2017). Micro drones are now performing at Madison Square Garden as part of each New York Knicks home game, the first resident drone show in a full-scale arena setting. Since early 2017, a drone swarm has been performing weekly on its first cruise ship. And micro drones performed thousands of flights at Changi Airport Singapore as part of its 2017 Christmas show.
Technologically, indoor drone show systems are challenging. They are among the most sophisticated automation systems in existence, with dozens of autonomous robotic aircraft operating in a safety-critical environment. Indoor drone shows require sophisticated, distributed system control and communications architectures to split up and recombine sensing and computation between aircraft and their off-board infrastructure. Core challenges are not unlike those found in modern systems for manned aviation (e.g., combining auto-pilots, GPS, and air traffic control) and in creating tomorrow’s smart cities (e.g., combining semi-autonomous cars with intelligent traffic lights in a city).
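As a deliberately simplified sketch of that on-board/off-board split, invented here for illustration and not taken from Verity Studios’ actual architecture, the idea is that the infrastructure streams setpoints at radio rate while each aircraft closes its fast stabilisation loop locally, so a missed packet degrades gracefully rather than crashing the vehicle:

```python
from dataclasses import dataclass

@dataclass
class Setpoint:
    x: float
    y: float
    z: float

class OnboardController:
    """Runs on the aircraft. The fast control loop keeps working even
    when the slower, lossy radio link misses an update."""

    def __init__(self):
        self.setpoint = None

    def on_radio_packet(self, setpoint: Setpoint):
        # Called at radio rate by the off-board infrastructure.
        self.setpoint = setpoint

    def control_step(self, position):
        # Called at control rate, much faster than the radio.
        if self.setpoint is None:
            return (0.0, 0.0, 0.0)  # no command yet: hover in place
        k = 0.5  # toy proportional controller: command scales position error
        return tuple(k * (s - p) for s, p in
                     zip((self.setpoint.x, self.setpoint.y, self.setpoint.z),
                         position))

# Off-board side: choreography generates setpoints; motion tracking
# supplies each drone's position (both hypothetical stand-ins here).
drone = OnboardController()
drone.on_radio_packet(Setpoint(1.0, 2.0, 1.5))
print(drone.control_step(position=(0.0, 0.0, 1.0)))  # -> (0.5, 1.0, 0.25)
```

Splitting the system this way keeps the safety-critical loop on the vehicle, while the computationally heavy choreography and tracking stay on the off-board infrastructure.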
These technological challenges are compounded by another: At least for permanent show installations, these systems need to be operated by non-experts. Two years ago, in one of the first major indoor drone shows, a swarm of micro drones flew over the audience at TED 2016. That system was operated by Verity Studios’ expert engineers. Creating a system that is easy enough to use, and reliable enough, to be operated by show staff is a huge technical challenge of its own. All of Verity’s 2017 shows mentioned above were fully client-operated, which speaks to the maturity that Verity’s drone show system has achieved.
For my colleagues and me, it is these technological challenges, together with the visual impact of indoor drone shows, that make these systems so much fun and so hugely rewarding to work with.
Creative potential
Creatively, the capabilities of today’s indoor drone show systems barely scratch the surface of the technology’s potential. For centuries, show designers were restricted to static scenes. Curtains were required to hide scene changes from the audience, lest stage hands rushing to move set pieces destroy the magic created by a live show. The introduction of automation to seamlessly move backdrops and other stage elements, and the later debut of automated lighting that could smoothly pan and tilt traditional, stationary illumination, were revolutionary.
Drones hold the potential to push automation further. The Lucies described above give a first inkling of the creative potential of flying lights that can be freely positioned in 3D space, appearing at will. Larger drones make it possible to extend that concept to nearly any object, including the creation of flying characters.
Safety
The most critical challenge for indoor drone show systems is safety. Indoor drone shows feature dozens of drones flying simultaneously and in tight formations, close to crowds of people, in a repeated fashion, in the high-pressure environment of a live show. For example, as part of the currently running New York Knicks drone show, 32 drones perform above 16 dancers, live in front of up to 20,000 people in New York’s Madison Square Garden arena, 44 times per season.
There are really only three ways to safely fly drones at live events.
The first way to achieve safety is the same one that keeps commercial aviation safe: system redundancy. Using this approach, Verity Studios’ larger Stage Flyer drones performed safely on Broadway, completing 398 shows and more than 7,000 autonomous flights, flying 8 times a week in front of up to 2,000 people for a year, without safety nets. The Stage Flyer drones are designed around redundancy: at least two of each component are used (e.g., two batteries, two flight computers, and a duplicate of each sensor) or existing redundancies are exploited. For example, the Stage Flyer drones have only four propellers and motors, like any quadcopter, but advanced algorithms that exploit the physics of flight allow these multi-rotor vehicles to fly with fewer than four propellers. The overall design allows these drones to continue to fly in spite of any individual component failure. For example, in one of the last Broadway shows, a Stage Flyer experienced a battery failure; the drone switched into its safety flight mode and landed, and the show continued with 7 instead of 8 drones. This approach to drone safety remains highly unusual: all drones available for purchase today have single points of failure.
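To make that failover behaviour concrete, here is a toy sketch, invented for illustration and in no way Verity Studios’ firmware, of how duplicated components and a degraded safety flight mode might fit together:

```python
class Battery:
    """Toy stand-in for one of the duplicated components."""
    def __init__(self, name):
        self.name = name
        self.healthy = True

def flight_mode(batteries):
    """Degrade gracefully as duplicated components fail."""
    healthy = [b for b in batteries if b.healthy]
    if len(healthy) == len(batteries):
        return "SHOW"         # full redundancy: keep performing
    if healthy:
        return "SAFETY_LAND"  # one failure: land, drop out of the show
    return "EMERGENCY"        # unreachable unless every duplicate fails

# Simulate the battery failure described above: the drone switches to its
# safety flight mode and lands; the show continues with the other drones.
batteries = [Battery("A"), Battery("B")]
batteries[0].healthy = False
print(flight_mode(batteries))  # -> SAFETY_LAND
```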
The second approach to safety is physical separation. This is how safety is usually achieved for outdoor drone shows: drones perform over a body of water, or roads are temporarily closed to create a large enough area without people. For example, the Intel drone show at the Super Bowl was recorded far away from the NRG stadium. In fact, for the Super Bowl, safety went a step further, adding “temporal separation” to the physical separation: the drone show was pre-recorded days ahead of time, and viewers in the stadium and on TV were only shown a video recording. For indoor drone light shows, physical separation can be achieved using safety nets.
The third approach to safely flying drones at live events is to make the drones so small that they have high inherent safety. Verity Studios’ Lucie micro drones weigh less than 1.8 ounces or 50 grams (including their flexible hull).
As the continuing string of safety incidents involving drones at live events attests, not everyone takes drone safety seriously. This is why my colleagues and I have worked with aviation experts and leading creatives to summarize best practices in an overview paper: Drone shows – Creative potential and best practices.
So, what’s in store for 2018? The appetite for indoor drone shows is huge, which is why Verity Studios is growing its team. And given the 2017 track record, there is a lot to look forward to — your favorite venue’s ceiling is the limit!
CANADA: Deloitte future jobs report recommends basic income
#251: Open Source Prosthetic Leg, with Elliott Rouse
In this episode, Audrow Nash interviews Elliott Rouse, Assistant Professor at the University of Michigan, about an open-source prosthetic leg: a robotic knee and ankle. Rouse’s goal is to provide an inexpensive and capable platform that researchers can use to work on prostheses without developing their own hardware, which is both time-consuming and expensive. Rouse discusses the design of the leg, the software interface, and the project’s timeline.
Elliott Rouse
Elliott Rouse is an Assistant Professor in the Mechanical Engineering Department at the University of Michigan, where he directs the Neurobionics Lab. The vision of his group is to discover the fundamental science that underlies human joint dynamics during locomotion and to incorporate these discoveries in a new class of wearable robotic technologies. The lab uses technical tools from mechanical and biomedical engineering, applied to the complex challenges of human augmentation, physical medicine, rehabilitation and neuroscience. Dr. Rouse and his research have been featured at TED and on the Discovery Channel, CNN, National Public Radio, Wired Magazine UK, Business Insider, and Odyssey Magazine.