The way animals move has yet to be matched by robotic systems. This is because biological systems exploit compliant mechanisms in ways their robotic cousins do not. However, recent advances in soft robotics aim to address these issues.
The stereotypical robot gait consists of jerky, uncoordinated and unstable movements. This is in stark contrast to the graceful, fluid movements seen throughout the animal kingdom. In part, this is because robots are built from rigid, often metallic, links, unlike their biological cousins, which use a mixture of hard and soft materials.
A key challenge for soft and hybrid hard-soft robotics is control. The use of rigid materials means it is possible to build relatively simple mathematical models of the system, which can be used to find control laws. Without rigid materials, the model balloons in complexity, leading to systems which cannot feasibly be used to derive control laws.
One possible approach is to offload some of the responsibility for producing a behaviour from the controller to the body of the system. Some examples of systems that take this approach include the Lipson-Jaeger gripper, various dynamic walkers, and the 1-DOF swimming robot Wanda. This offloading of responsibility from the controller to the body is called ‘Morphological Computation.’
There are a number of mechanisms by which the body can reduce the burden placed on a controller: it can limit the range of available behaviours, reducing the need for a complex model of the system[1]; it can structure the sensory data in a way that simplifies state observation[2]; or it can be used as an explicit computational resource.
The idea of an arm or leg as a computer may strike people as odd. The clean lines of a MacBook look very different from our wobbly appendages. In what sense, then, is it reasonable to claim that such systems can be used as computers?
Computational capacity
To make the concept clear, it is necessary to first introduce the concept of computational capacity. In a digital computer, transistors compute logical operations such as AND and XOR, but computation is possible via any system which can take inputs to outputs.
For example, we could use a system which classifies numbers as either positive or negative as a basic building block. More complex computations can be achieved by connecting these basic units together.
Of course, not all choices of mapping are equally useful. In some cases, computations that we wish to carry out may not be possible if we have chosen the wrong building block. The computational capacity of a system is a measure of how complex a calculation we can perform by combining our basic computational unit.
To make this more concrete, it is helpful to introduce perceptrons: the forebears of today’s deep learning systems. A perceptron takes a set number of inputs and produces either a 1 or 0 as an output. To calculate the output for a given input, we follow a simple process:
Multiply each input x_i by its corresponding weight w_i
Sum the weighted inputs
Output 1 if the sum is above a threshold and output 0 otherwise.
In order to make a perceptron perform a specific computation, we have to find the corresponding set of weights.
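The procedure above can be sketched in a few lines of Python. The weights and threshold here are hand-picked rather than learned, chosen purely for illustration so that the perceptron computes a logical AND:

```python
def perceptron(inputs, weights, threshold):
    """1. weight each input, 2. sum, 3. threshold to 0 or 1."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# With these hand-picked weights, the perceptron outputs 1
# only when both inputs are 1 -- a logical AND gate.
for a in (0, 1):
    for b in (0, 1):
        print((a, b), "->", perceptron((a, b), weights=(1.0, 1.0), threshold=1.5))
```

Finding weights that realise a desired computation is exactly the learning problem.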
If we consider the perceptron as our basic building block for computation, we can ask what kinds of computation we can perform.
It has been shown that perceptrons are limited to computing linearly separable functions; a perceptron cannot, for example, correctly learn to compute the XOR function.[3]
To overcome this limitation, it is necessary to expand the complexity of the computational system. In the case of the perceptron, we can add “hidden” layers, turning the perceptron into a neural network. Neural networks with enough hidden layers are in theory capable of computing almost any function which depends only upon its input at the current time.[4]
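To see how a hidden layer lifts the linear limitation, XOR can be computed by hand-wiring three threshold units: two hidden units detect OR and AND, and the output fires when OR is true but AND is not. The weights below are chosen by hand, purely for illustration:

```python
def step(x):
    """Threshold unit: output 1 if the input is positive, else 0."""
    return 1 if x > 0 else 0

def xor_net(a, b):
    """A tiny two-layer network computing XOR as (a OR b) AND NOT (a AND b)."""
    h_or = step(a + b - 0.5)    # hidden unit 1: fires if at least one input is 1
    h_and = step(a + b - 1.5)   # hidden unit 2: fires only if both inputs are 1
    return step(h_or - h_and - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print((a, b), "->", xor_net(a, b))
```

A single perceptron cannot draw the required decision boundary; the hidden layer supplies the missing non-linearity.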
If we desire a system that can perform computations that depend on prior inputs (i.e. have memory), then we need to add recurrent connections, which feed a node’s output back into the network. Recurrent neural networks have been shown to be Turing complete, which means they could, in theory, be used as a basis for a general-purpose computer.[5]
FIGURE (FFNN and RNN)
These considerations give us a set of criteria by which we can begin to assess the computational capacity of a system — we can classify the computational capacity by the amount of non-linearity and memory that a system has.
The capacity of bodies
Returning to the explicit use of a body as a computational structure, we can begin to consider the capacity of a body. To start with, we need to define an input and output for our computer.
In most cases, the body of a robot will contain both actuators and sensors. If we limit ourselves to systems with a single actuator, then the natural definitions would be to take the actuator as the input and the sensors as the output. To simplify the output, we can take a weighted sum of the sensor readings; we know from the limitations of the perceptron that this will not add either memory or non-linearity to our system.
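In code, such a weighted-sum readout is just a linear fit to recorded sensor data. A minimal sketch, where the sensor recordings and the target signal are synthetic stand-ins chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
T, S = 200, 10                          # time steps, number of sensors

sensors = rng.standard_normal((T, S))   # recorded sensor time series
# Stand-in target: here an exact linear function of the sensors, so the
# fitted readout can reproduce it perfectly.
target = sensors @ rng.uniform(-1.0, 1.0, S)

# The readout is a weighted sum of the sensor readings plus a bias;
# only these weights are trained, the body itself is left untouched.
X = np.hstack([sensors, np.ones((T, 1))])
weights, *_ = np.linalg.lstsq(X, target, rcond=None)
readout = X @ weights                   # readout output at every time step
```

Anything such a readout computes beyond a linear combination of the current sensor values must therefore come from the body.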
FIGURES (OCTOPUS SYSTEM)
Given such a setup, we can then test the ability of the system to perform certain computations. What kind of computations can this system perform?
To answer this question, Nakajima et al.[6] used a silicone arm, inspired by an octopus tentacle. The arm was actuated by a single servo motor and contained a number of stretch sensors embedded within it.
Amazingly, this system was shown to be capable of computing a number of functions that required both non-linearity and memory. For example, the arm was shown to be capable of acting as a parity bit checker. Given a sequence of inputs (either 1 or 0), the system would output a 1 if there was an even number of 1s in the input and 0 otherwise. Such a task requires both memory of prior inputs and non-linearity. As the readout cannot add either of these, we must conclude that the body itself has added this capacity; in other words, the body contributes computational capacity to the system.
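In simulation, the same experiment can be imitated by replacing the silicone arm with any input-driven nonlinear dynamical system and training only the weighted-sum readout. Below is a minimal sketch with a random recurrent network standing in for the body; the sizes, scalings and seed are illustrative choices, not those of Nakajima et al. The task is 2-bit parity: output 1 when the last two input bits contain an even number of 1s.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 600                        # number of "sensors", time steps

# Random nonlinear dynamics standing in for the soft body.
W_in = rng.uniform(-1.0, 1.0, N)                      # input coupling
W = rng.standard_normal((N, N)) * (0.9 / np.sqrt(N))  # recurrent weights

u = rng.integers(0, 2, T)              # stream of input bits
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])   # the "body" reacts to the input
    states[t] = x

# Target: 1 iff the last two inputs contain an even number of 1s (XNOR).
y = 1 - (u[1:] ^ u[:-1])

# The readout is only a weighted sum of sensor states, fit by least squares.
X = np.hstack([states[1:], np.ones((T - 1, 1))])
w, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = (X @ w > 0.5).astype(int)
accuracy = (pred == y).mean()
```

Since the readout adds neither memory nor non-linearity, any accuracy well above chance on this task must be supplied by the "body" dynamics.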
Prospects and outlook
A number of systems which exploit the explicit computational capacity of the body have already been built. For example, the Kitty robot [7] uses its compliant spine to compute a control signal. Different behaviours can be achieved by adjusting the weights of the readout, and the controller is robust to certain perturbations.
As a next step, we are investigating the role of adaptive bodies. Many biological systems are capable of not only changing their control but also adjusting the properties of their bodies. Finding the right morphology can simplify control as discussed in the introduction. However, it will also affect the computational capacity of the body. We are investigating the connection between the computational capacity of a body and its behaviour.
[1] Tedrake, Russ, Teresa Weirui Zhang, and H. Sebastian Seung. “Learning to walk in 20 minutes.” Proceedings of the Fourteenth Yale Workshop on Adaptive and Learning Systems. Vol. 95585. 2005.
[2] Lichtensteiger, Lukas, and Peter Eggenberger. “Evolving the morphology of a compound eye on a robot.” Advanced Mobile Robots, 1999.(Eurobot’99) 1999 Third European Workshop on. IEEE, 1999.
[3] Minsky, Marvin, and Seymour Papert. “Perceptrons.” (1969).
[5] Siegelmann, Hava T., and Eduardo D. Sontag. “On the computational power of neural nets.” Journal of computer and system sciences 50.1 (1995): 132-150.
[6] Nakajima, Kohei, et al. “Exploiting short-term memory in soft body dynamics as a computational resource.” Journal of The Royal Society Interface 11.100 (2014): 20140437.
[7] Zhao, Qian, et al. “Spine dynamics as a computational resource in spine-driven quadruped locomotion.” Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on. IEEE, 2013.
Drone company Atmos UAV has launched Marlyn, a lightweight drone that flies automatically, effortlessly and at high wind speeds. One of the first customers to sign up is Skeye, Europe’s leading unmanned aircraft data provider. The new technology allows industry professionals around the world to map terrain 10 times faster and is designed to eliminate drone crashes.
“With her unique properties, Marlyn allows us to tackle even our most challenging jobs,” says Pieter Franken, co-founder of Skeye, one of Europe’s leading unmanned aircraft data providers. He continues: “We expect time savings of up to 50% and moreover save a huge amount of our resources and equipment.” Marlyn can cover 1 km² in half an hour with a ground sampling distance of 3 cm.
“We are very excited to work together with Skeye and to have the opportunity to implement their operational expertise in this promising project,” Sander Hulsman, CEO of Atmos UAV adds. “Marlyn is all about making aerial data collection safer and more efficient, allowing professional users across all industries to access the skies, enabling them to focus more on analysing the actual information and improving their business effectiveness.”
Mapping made easy
With Marlyn, mapping jobs consist of four easy steps. First, a flight plan is generated based on the required accuracy and the specified project area. Secondly, the drone starts its flight and data collection by a simple push of a button. Thirdly, after Marlyn has landed at the designated spot, the captured data is automatically organised and processed by image processing software of choice. Finally, a detailed analysis can be done to provide actionable insights.
About Atmos UAV
Atmos UAV is a high-tech start-up that designs and manufactures reliable aerial observation and data gathering solutions for professional users. It all originated from a project at Delft University of Technology. With the support of its faculty of Aerospace Engineering, it grew into the fast-growing spin-off company Atmos UAV. The company specializes in land surveying, mining, precision agriculture, forestry and other mapping related applications. Atmos UAV is currently hiring to accommodate its rapid expansion.
In 1985, a twenty-two-year-old Garry Kasparov became the youngest World Chess Champion. Twelve years later, he was defeated by the only player capable of challenging the grandmaster: IBM’s Deep Blue. That same year (1997), RoboCup was formed to take on the world’s most popular game, soccer, with robots. Twenty years later, we are on the threshold of accomplishing the biggest feat in machine intelligence: a team of fully autonomous humanoids beating human players at FIFA World Cup soccer.
Many of the advances that have led to the influx of modern autonomous vehicles and machine intelligence are the result of decades of competitions. While Deep Blue and AlphaGo have beaten the world’s best players at board games, soccer involves real-world complexities (see chart) that must be mastered in order to best humans on the field. This requires RoboCup teams to combine a number of mechatronic technologies within a humanoid device, such as real-time sensor fusion, reactive behavior, strategy acquisition, deep learning, real-time planning, multi-agent systems, context recognition, vision, strategic decision-making, motor control, and intelligent robot control.
Professor Daniel Lee of University of Pennsylvania’s GRASP Lab described the RoboCup challenges best, “Why is it that we have machines that can beat us in chess or Jeopardy but we can beat them in soccer? What makes it so difficult to embody intelligence into the physical world?” Lee explains, “It’s not just the soccer domain. It’s really thinking about artificial intelligence, robotics, and what they can do in a more general context.”
RoboCup has become so important that the challenge of soccer has now expanded into new leagues that focus on many commercial endeavors from social robotics, to search & rescue, to industrial applications. These leagues have a number of subcategories of competition with varying degrees of difficulty. In less than two months, international teams will convene in Nagoya, Japan for the twenty-first games. As a preview of what to expect, let’s review some of last year’s winners. And just maybe it could give us a peek at the future of automation.
RoboCup Soccer
While Iran’s human soccer team is ranked 28th in the world, their robot counterparts (Baset Pazhuh Tehran) won 1st place in the AdultSize Humanoid competition. Baset’s secret sauce is its proprietary algorithms for motion control, perception, and path planning. According to Baset’s team description paper, the key was building a “fast and stable walk engine” based upon the success of past competitions. This walk engine combines “all actuators’ data in each joint, and changing the inverse and forward kinematics” to “avoid external forces affecting robot’s stability, this feature plays an important role to keep the robot standing when colliding to the other robots or obstacles.” Another big factor was their goalkeeper, which used a stereo vision sensor to detect incoming plays and helped win the competition by having “better percepts of goal poles, obstacles, and the opponent’s goalkeeper. To locate each object in real self-coordinating system.” The team is part of a larger Iranian corporation, Baset, which could deploy this perception technology in the field. Baset’s oil and gas clients could benefit from better localization techniques and object recognition for pipeline inspections and autonomous work vehicles. If RoboCup’s humanoids are capable of playing humans in soccer by 2050, one has to wonder whether Baset’s mechanical players will spend their offseason working in the Arabian Peninsula.
RoboCup Rescue
In 2001 the RoboCup organization added simulated rescue to the course challenge, paving the way for many life-saving innovations already being embraced by first responders. The course starts with a simulated earthquake environment whereby the robot performs a search and rescue mission lasting 20 minutes. The skills are graded by overcoming a number of obstacles that are designed to assess the robot’s autonomous operation, mobility, and object manipulation. Points are given by the number of victims found by the robot, details about the victims, and the quality of the area mapped. In 2016, students from the King Mongkut’s University of Technology North Bangkok won first place with their Invigorating Robot Activity Project (or iRAP).
Similar to Baset, iRAP’s success is largely based upon solving problems from previous contests, where they placed consistently in the top tier. The team had a total of four robots: one autonomous robot, two tele-operated robots, and one aerial drone. Each of the robots had multiple sensors for providing critical data, such as CO2 levels, temperature, positioning, 2D mapping, images, and two-way communications. iRAP’s devices navigated the test environment’s rough surfaces, hard terrain, rolling floor, stairs, and inclined floor with remarkable ease. The most impressive performer was the caged quadcopter, which used enhanced sensors to localize itself within an outdoor search perimeter. According to the team’s description paper, “we have developed the autonomously outdoor robot that is the aerial robot. It can fly and localize itself by GPS sensor. Besides, the essential sensors for searching the victim.” It is interesting to note that the Thai team’s design was remarkably similar to Flyability’s Gimball, which won first place in the UAE’s 2015 Drones for Good Competition. Like the RoboCup winner, the Gimball was designed specifically for search & rescue missions using a lightweight carbon fiber cage.
As RoboCup contestants push the envelope of navigation mapping technologies, it is quite possible that the 2017 fleet could develop subterranean devices that could actively find victims within minutes of being buried by the earth.
RoboCup @Home
The home, like soccer, is one of the most chaotic environments for robots to operate in successfully. It is also one of the biggest areas of interest for consumers. Last year, RoboCup @Home celebrated its 10th anniversary by bestowing the top accolade on Team Bielefeld (ToBI) of Germany. ToBI built a humanoid-like robot capable of learning new skills through natural language within unknown environments. According to the team’s paper, “the challenge is two-fold. On the one hand, we need to understand the communicative cues of humans and how they interpret robotic behavior. On the other hand, we need to provide technology that is able to perceive the environment, detect and recognize humans, navigate in changing environments, localize and manipulate objects, initiate and understand a spoken dialogue and analyse the different scenes to gain a better understanding of the surrounding.” In order to achieve these ambitious objectives, the team created a Cognitive Interaction Toolkit (CITK) to support an “aggregation of required system artifacts, an automated software build and deployment, as well as an automated testing environment.”
Infused with its proprietary software the team’s primary robot, the Meka M1 Mobile Manipulator (left) demonstrated the latest developments in human-robot-interactions within the domestic setting. The team showcased how the Meka was able to open previously shut doors, navigate safely around a person blocking its way, and recognize and grasp many household objects. According to the team, “the robot skills proved to be very effective for designing determined tasks, including more script-like tasks, e.g. ’Follow-Me’ or ’Who-is-Who’, as well as more flexible tasks including planning and dialogue aspects, e.g. ’General-PurposeService-Robot’ or ’Open-Challenge’.”
RoboCup @Work
The @Work category debuted in 2016, with the German team from Leibniz Universität Hannover (LUHbots) winning first place. While LUHbots’ hardware was mostly off-the-shelf parts (a KUKA youBot mobile robot), the software utilized a number of proprietary algorithms. According to the paper, “in the RoboCup we use this software e.g. to grab objects using inverse kinematics, to optimize trajectories and to create fast and smooth movements with the manipulator. Besides the usability the main improvements are the graph based planning approach and the higher control frequency of the base and the manipulator.” The key to using this approach within a factory setting is its robust object recognition. The paper explains, “the robot measures the speed and position of the object. It calculates the point and time where the object reaches the task place. The arm moves above the calculated point. Waits for the object and accelerates until the arm is directly above the moving-object with the same speed. Overlapping the down movement with the current speed until gripping the object. The advantage of this approach is that while the calculated position and speed are correct every orientation and much higher objects can be gripped.”
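The interception logic the quote describes reduces to a small kinematic calculation: measure the object’s position and speed, then compute when it will reach the task place so the arm can match its velocity there. A minimal sketch of that first step, using a hypothetical 1-D conveyor model (the function name and numbers are illustrative, not LUHbots’ code):

```python
def intercept_time(obj_pos, obj_speed, task_pos):
    """Time until an object moving at constant speed reaches the task
    place (hypothetical 1-D conveyor model; units are metres and m/s)."""
    if obj_speed <= 0:
        raise ValueError("object must be moving towards the task place")
    return (task_pos - obj_pos) / obj_speed

# Object at 0.25 m moving at 0.25 m/s towards a task place at 1.0 m:
t = intercept_time(0.25, 0.25, 1.0)   # 3.0 s until interception
```

Knowing both the interception point and time lets the arm arrive above the object early and accelerate to the conveyor’s speed before gripping, as the team describes.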
Similar to other finalists, LUHbots’ object recognition software became the determining factor to its success. RoboCup’s goal of playing WorldCup soccer with robots may seem trivial, but its practice is anything but meaningless. In each category, the advances developed on the field of competitive science are paying real dividends on a global scale across many industries.
In the words of the RoboCup mission statement: “The ultimate goal is to ‘develop a robot soccer team which beats the human world champion team.’ (A more modest goal is ‘to develop a robot soccer team which plays like human players.’) Needless to say, the accomplishment of the ultimate goal will take decades of effort. It is not feasible with current technologies to accomplish this in the near future. However, this goal can easily lead to a series of well-directed subgoals. Such an approach is common in any ambitious or overly ambitious project. In the case of the American space program, the Mercury project and the Gemini project, which manned an orbital mission, were two precursors to the Apollo mission. The first subgoal to be accomplished in RoboCup is ‘to build real and software robot soccer teams which play reasonably well with modified rules.’ Even to accomplish this goal will undoubtedly generate technologies, which will impact a broad range of industries.”
* Editor’s note: thank you to Robohub for providing a twenty-year history of RoboCup videos.
A U.S. Court of Appeals has struck down the Federal Aviation Administration’s drone registration requirement for recreational operators. The federal court ruled that the registration system violates the 2012 FAA Modernization and Reform Act, which does not grant the FAA the authority to regulate hobby aircraft. More than 820,000 people have registered their drones with the FAA since the requirement was implemented in 2015. (Recode)
In a statement, Senator John Hoeven (R-ND) urges the Department of Defense to develop an air traffic management system for drones. (Press Release)
In a speech at the Special Operations Forces Industry Conference, General Raymond Thomas said that armed ISIS drones were the “most daunting problem” of the past year. (DefenseNews)
At the Indianapolis Star, Ryan Martin writes that police departments in Indiana are pushing back against a state law that prohibits police from using drones for surveillance.
At War is Boring, David Axe looks at how the U.S. Air Force is using robotic F-16 fighters to test the teaming of manned and unmanned aircraft in combat.
At the Intercept, Shuaib Almosawa and Murtaza Hussain write that family members of individuals killed in a U.S. drone strike are disputing claims that the men were members of al-Qaeda.
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and ETH Zurich in Switzerland are developing systems that enable drones to maintain the exact framing of a scene specified by the operator. (Engadget)
The U.S. Naval Research Laboratory is working to develop solar-powered unmanned aircraft that can remain aloft for extended periods. (R&D Magazine)
An official at the U.S. Army’s Tank Automotive Research, Development and Engineering Center has said that the service’s next tank will have the ability to operate aerial drones. (War is Boring)
Researchers at the Georgia Institute of Technology are developing drone swarms in which the aircraft are protected by a virtual bumper area that prevents them from crashing into each other. (Engadget)
Meanwhile, the Air Force Research Laboratory presented a number of unmanned aircraft projects at the second biennial Department of Defense Lab Day, including the Ninja counter-drone system and the Low Cost Attritable Aircraft Technology Program. (Press Release)
The Unmanned Mine Counter Measure Integrated System, an unmanned undersea vessel built by Russian shipyard SNSZ, has been declared fully operational with the Kazakh Navy. (Press Release)
Singapore-based firm Zycraft announced that its Independent USV completed a test in which it was deployed in the South China Sea for a continuous period of 22 days. (IHS Jane’s 360)
The U.S. Air Force Research Laboratory is developing small unmanned aircraft that can be launched from AC-130 gunships or similar manned aircraft. (IHS Jane’s 360)
During a trial operation off the coast of San Clemente Island, a U.S. Navy MQ-8B Fire Scout drone helicopter served as a laser designation platform for a Hellfire missile fired from an MH-60S Knighthawk helicopter. (Press Release)
Drones at Work
Japanese fighter jets were scrambled in response to a drone operated by a Chinese vessel over disputed waters in the East China Sea. (Nikkei Asian Review)
A Jordanian Air Force F-16 fighter jet shot down a drone that was flying near the Syrian border. (IHS Jane’s 360)
Officials in Elk City, Oklahoma enlisted students from Embry-Riddle Aeronautical University to survey damage from a recent tornado with a drone. (News9)
The West Australia Police air wing is planning to test drones for a range of operations including investigations and search and rescue missions. (The West Australian)
Researchers using drones to study narwhals found that the animals appear to use their long, sharp tusks to stun their prey. (New Atlas)
U.S. startup DroneSeed has obtained FAA approval to operate multiple drones carrying agricultural payloads. (Unmanned Aerial Online)
Police in Brick Township, New Jersey have purchased a drone for search and rescue and scene documentation. (App.com)
As part of an inauguration ceremony for an unmanned aircraft runway, Virginia governor Terry McAuliffe flew in a remotely piloted airplane. (Press Release)
The Israeli Air Force is planning to replace its Sea Scan manned maritime patrol aircraft with the IAI Heron 1, a medium-altitude long-endurance drone. (UPI)
Industry Intel
DroneWERX, a new initiative by the U.S. Special Operations Command and Strategic Capabilities Office, will focus on rapidly acquiring and fielding new drones and robots, swarms, and machine learning and artificial intelligence technologies for special operations. (DefenseNews)
The U.S. Air Force awarded General Atomics Aeronautical Systems a $400 million contract for 36 MQ-9 Reaper aircraft. (DoD)
The U.S. Navy awarded Northrop Grumman Systems a $303.9 million contract modification for three Lot 2 MQ-4C Triton unmanned aircraft and associated equipment. (DoD)
Silverstone, the British racing circuit, has reportedly contracted Drone Defence to provide counter-drone systems for the Formula 1 race in July. (The Sun)
The University of Aarhus in Denmark awarded YellowScan a $114,316 contract for a UAS LiDAR solution for surveying. (Ted)
Saudi Arabia launched the Saudi Arabian Military Industries, a state-owned military industrial company that will manufacture drones, as well as a variety of other military systems and equipment. (Reuters)
For updates, news, and commentary, follow us on Twitter. The Weekly Drone Roundup is a newsletter from the Center for the Study of the Drone. It covers news, commentary, analysis and technology from the drone world. You can subscribe to the Roundup here.
In a three-year competition, five international teams competed to develop a robot for routine, inspection and emergency operations on oil & gas sites. Gas leaks on oil drilling rigs frequently pose an increased risk to safety and the environment.
The acronym ARGOS stands for Autonomous Robot for Gas and Oil Sites: the robot performs its assigned tasks independently. If necessary, an operator can intervene at any time via a satellite-based connection from land and take control of the robot.
The robot, developed by taurob GmbH together with TU Darmstadt, can read pointer instruments, fill-level displays and valve positions using cameras and laser scanners. It can measure temperatures and gas concentrations, detect abnormal noises, obstacles and people around it, and safely manoeuvre on wet stairs. Adverse environmental conditions such as heavy rain, extreme temperatures and high wind speeds do not pose a problem.
“Our robot is also the first fully automated inspection robot in the world that can be used safely in a potentially explosive atmosphere,” says Dr Lukas Silberbauer, who founded the company taurob together with his colleague Matthias Biegl in 2010. The reason: the robot is fully ATEX certified, so it cannot trigger an explosion while operating in potentially explosive gases.
The Austrian company was able to build on certification experience gained during its first project: a robot for firefighters.
“When we heard about Total’s contest, we immediately realized that this would be a great opportunity for us,” says Matthias Biegl.
Total has announced that it will use the new robots starting from 2020 on its oil drilling rigs.
The project was supported by FFG (Österreichische Forschungsförderungsgesellschaft) in the context of EUROSTARS (Co-financed by the EU).
For decades robotic exoskeletons were the subject of science fiction novels and movies. But in recent years, exoskeleton technology has made huge progress towards reality and exciting research projects and new companies have surfaced.
Typical applications of today’s exoskeletons include rehabilitation, such as stroke therapy or support of users with a spinal cord injury, and industrial uses, such as back support for heavy lifting or power tool operation. And while the field is growing quickly, it is currently not easy to get involved: learning materials and exoskeleton courses or classes are not yet widely available. This makes entry difficult, as learning about exoskeletons is not possible by theory alone; ideally, it involves practical hands-on experience (feel it, understand it). Unfortunately, the necessary exoskeleton hardware is expensive and not usually available to students or hobbyists wanting to explore the field.
This is the motivation behind the EduExo kit, a 3D-printable, Arduino-powered robotic exoskeleton now on Kickstarter. The goal of the project is to make exoskeleton technology available, affordable and understandable for anyone interested.
The EduExo is a robotic exoskeleton kit that you assemble and program yourself. The kit contains all the hardware you need to build an elbow exoskeleton. An accompanying handbook contains a tutorial that will guide you through the different assembly steps. In addition, the handbook provides background information on exoskeleton history, functionality and technology to offer a well-rounded introduction to the field.
The kit is being developed for the following users:
High school and college students who want to learn about robotic exoskeleton technology to prepare themselves for a career in exoskeletons and wearable robotics.
Makers and hobbyists who are looking for an interesting and entertaining project in a fascinating field.
Teachers and professors who want to set up exoskeleton courses or labs. The EduExo classroom set provides additional teaching material to facilitate the preparation of a course.
The main challenge of the project was to design exoskeleton hardware that is cheap enough to be affordable for a high school student, yet complex enough to serve as a learning platform that teaches relevant, state-of-the-art exoskeleton technology.
To lower the price, mostly off-the-shelf components are used. The hardware is a simple one-degree-of-freedom design for the elbow joint. The EduExo box comes fully equipped and contains the handbook, the exoskeleton structure, preassembled cuffs, an Arduino microcontroller to control the device, a force sensor (and amplifier) to measure the human-robot interaction, a servo motor for actuation, and all the small parts needed to assemble and wire the exoskeleton.
The exoskeleton’s structure is 3D-printable, so users with access to a 3D printer can produce the parts themselves; a maker edition is available for people who prefer this do-it-yourself route.
The software required to program the exoskeleton and to create computer games can be downloaded for free (Arduino IDE to program the Microcontroller and the Unity 3D game engine).
Despite the comparatively simple design, the EduExo will teach its users many interesting aspects of exoskeleton technology: how the mechanical design mirrors human anatomy, how to connect the sensors and the electronics board, and how to design and program an exoskeleton control system. Further, it explains how to connect the exoskeleton to a computer and use it as a haptic device in combination with a self-made virtual reality simulation or computer game.
A ‘muscle control’ extension is offered separately that explains how to measure the exoskeleton user’s muscle activity and use it to control the device.
The development of the kit is currently in its final phase and it is already possible to order it through the ongoing Kickstarter campaign. Shipping is scheduled for August 2017.
Additional information can be found on the EduExo website: www.eduexo.com
A record number of teams submitted beautiful robot-created artwork for the second year of this 5-year worldwide competition. In total, there were 38 teams from 10 countries who submitted 200 different artworks!
Winners were determined based on a combination of public voting (over 3000 people with a Facebook account), a panel of judges consisting of working artists, critics, and technologists, and how well each team met the spirit of the competition: to create something beautiful using a physical brush and robotics, and to share what they learned with others. Learn more about the goals of the contest and its rules on the competition website.
Teams are encouraged to hang on to their artwork as there will be a physical exhibition of robot-created artwork following next year’s competition (Summer 2018). This exhibition, most likely in Seattle, WA, will showcase winners of the 2017 and 2018 competitions. The goal is to test Andy Warhol’s theory that “You know it’s ART, when the check clears.”
… and now, for the team winners of the 2017 Robot Art Competition:
This project from Columbia shows a high level of skill with brushstrokes. This, combined with deep learning algorithms, produces some lovely paintings from source images or from scratch. When the team used a photograph as the source, they were able to create plenty of variation from the original, using a fluid medium to produce an atmospheric and open-ended visual experience. Much of their work had a painterly and contemporary presentation.
2nd Place—$25,000—CMIT ReART, Kasetsart University (Thailand)
Artists program this robot brushstroke by brushstroke, using a haptic recording system that generates volumes of data about the position of the brush and the forces being exerted. When replayed, ReART generates a perfect reproduction of the original strokes. Haptic recording and playback allows for remarkably high-quality ink-brush drawings.
One of the marks of a successful commercial artist is knowing their market. In this case, the students chose to paint the popular and recently deceased Thai King, and were therefore able to get many students to vote for their team.
All of this team’s offerings are important. They are aiming at an interpretation of the optical properties of oil paint, applying deep learning to them. Spontaneous paint, “mosaicing” of adjacent tones, layering effects and the graphical interplay between paint strokes of varying textures are all hand/eye, deeply sophisticated aspects of oil painting that this team is trying to evince together with a robot.
This team also won the $5,000 prize for technical contribution.
4th Place—$6,000—e-David—University of Konstanz (Germany)
Using software that enables collaboration between a human artist and a roboticist, e-David closely mimics the way a human painter would work on the canvas. An accompanying academic paper goes into deep detail about their approach to the project.
5th Place—$4,000—JACKbDU—New York University Shanghai (China)
Clean lines, interesting abstractions, and both familiar and abstract subjects made me appreciate the overall body of work. In particular, purely on aesthetics, these were among the most compelling.
6th Place—$2,000—HEARTalion—Halmstad University (Sweden)
If this body of work was exhibited at a gallery and I was told that the artist aimed to capture emotion through color, composition, and textures, I would buy (says one of our professional judges). The bold brush strokes, and the cool or warm palettes matched to the emotional quality expressed, all made sense, but felt alive. Loved them.
Loved the composition of both of these pieces. Both pieces bring an emotive calm with just a few colors, technique, and simplicity. Gem-style optical sketch, intimate scale, good formal balance, medium application and value panels.
This project uses the precision of a robot to take crude brush dabs to astonishing levels. By incorporating 3D scans into its image generation, this bot operates with a complete understanding of its subject, much like a human painter would have someone sitting for them. The team was kind enough to publish their 3D models and source code, so others can learn from and build off of their work.
9th Place—$2,000—CARP—Worcester Polytechnic Institute (USA)
Very good line. Lots of space in this drawing, and the central form is simultaneously dense and traversable. Good architectural reference in clear tooling, but enlivened by sumi-style or Franz Kline strokes. If the shapes can continue to be explored while maintaining the hand/machine balance, this will remain a strong avenue.
Slightly synesthetic. Beautiful and balanced fusion of technical and handmade, crystalline and organic.
10th Place—$2,000—BABOT—Massachusetts Institute of Technology (USA)
MIT’s BABOT was able to produce this and other inspiring vistas.
In 2016, the European Union co-funded 17 new robotics projects from the Horizon 2020 Framework Programme for research and innovation. 16 of these resulted from the robotics work programme, and 1 project resulted from the Societal Challenges part of Horizon 2020. The robotics work programme implements the robotics strategy developed by SPARC, the Public-Private Partnership for Robotics in Europe (see the Strategic Research Agenda). Every week, euRobotics will publish a video interview with a project, so that you can find out more about their activities.
A wide variety of research and innovation themes are represented in the new projects: from healthcare via transportation, industrial and logistics robotics, to event media production using drones. Some deal with complex safety matters on the frontier where robots meet people, to ensure that no one comes to harm. Others will create a sustainable ecosystem in the robotics community, setting up common platforms supporting robotics development. One project deals exclusively with the potentially radical changes facing society with the rise of new autonomous technologies. The projects are either helping humans in their daily lives at home or at work, collaborating with humans on difficult, strenuous tasks, or taking care of dangerous tasks, reducing the risk to humans.
The research and innovation projects focus on a wide variety of Robotics and Autonomous Systems and capabilities, such as navigation, human-robot interaction, recognition, cognition and handling. Many of these abilities are transferable to other fields as well.
Advancing Anticipatory Behaviors in Dyadic Human-Robot Collaboration: An.Dy
Objective
Obj1. ANDY will develop the ANDYSUIT, a wearable technology for monitoring humans involved in whole-body physical interaction tasks.
Obj2. Based on the ANDYSUIT, ANDY will generate the ANDYDATASET, a collection of motion and force captures of humans involved in collaboration tasks.
Obj3. From the ANDYDATASET, ANDY will develop ANDYMODEL, a set of models describing human and robot behaviour when engaged in collaborative tasks.
Obj4. With ANDYSUIT and ANDYMODEL, ANDY will develop ANDYCONTROL, a reactive and predictive control strategy for human-robot physical collaboration.
Expected impact
Impact on the manufacturing domain: ANDY technologies support this objective in two ways: (1) by increasing productivity through more effective production and service processes in which the strengths of humans and robots are optimally combined, and (2) by maintaining workers’ health until retirement age, including reduced costs for health care and compensation.
Impact on healthcare: the ANDYSUIT will open a completely new field for methodological analysis, with the possibility of monitoring patients outside the clinic as well.
Partners
FONDAZIONE ISTITUTO ITALIANO DI TECNOLOGIA
INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET EN AUTOMATIQUE
INŠTITUT JOŽEF ŠTEFAN
DEUTSCHES ZENTRUM FÜR LUFT- UND RAUMFAHRT
XSENS TECHNOLOGIES BV
IMK AUTOMOTIVE GMBH
OTTO BOCK HEALTHCARE GMBH
ANYBODY TECHNOLOGY A/S
Coordinator: Francesco Nori
francesco.nori@iit.it
http://iron76.github.io
Instituto Italiano di Tecnologia – iCub Facility
Via Morego, 30
16163 Genova, Italy
Phone: (+39) 010 71 781 420
Fax: +39 010 71 70 817
Twitter: @AnDy_H2020
Google auto-complete controversy: The algorithm directed people looking for information about the Holocaust to neo-Nazi websites
These errors are not primarily caused by problems in the data that can make algorithms discriminatory, or their inability to improvise creatively. No, they stem from something more fundamental: the fact that algorithms, even when they are generating routine predictions based on non-biased data, will make errors. To err is algorithm.
The costs and benefits of algorithmic decision-making
We should not stop using algorithms simply because they make errors.[2] Without them, many popular and useful services would be unviable.[3] However, we need to recognise that algorithms are fallible, and that their failures have costs. This points to an important trade-off between more (algorithm-enabled) beneficial decisions and more (algorithm-caused) costly errors. Where does the balance lie?
Economics is the science of trade-offs, so why not think about this topic like economists? This is what I have done ahead of this blog, creating three simple economics vignettes that look at key aspects of algorithmic decision-making.[4] These are the key questions:
Risk: When should we leave decisions to algorithms, and how accurate do those algorithms need to be?
Supervision: How do we combine human and machine intelligence to achieve desired outcomes?
Scale: What factors enable and constrain our ability to ramp up algorithmic decision-making?
The two sections that follow give the gist of the analysis and its implications. The appendix at the end describes the vignettes in more detail (with equations!).
1. Risk: when to leave decisions to algorithms
in an information rich world, attention becomes a scarce resource.
This applies to organisations as much as it does to individuals.
The ongoing data revolution risks overwhelming our ability to process information and make decisions, and algorithms can help address this. They are machines that automate decision-making, potentially increasing the number of good decisions that an organisation can make.[5] This explains why they have taken off first in industries where the volume and frequency of potential decisions go beyond what a human workforce can process.[6]
What drives this process? For an economist, the main question is how much value the algorithm will create with its decisions. Rational organisations will adopt algorithms with high expected values.
An algorithm’s expected value depends on two factors: its accuracy (the probability that it will make a correct decision), and the balance between the reward from a correct decision and the penalty from an erroneous one.[7] Riskier decisions (where penalties are big compared to rewards) should be made by highly accurate algorithms. You would not want a flaky robot running a nuclear power station, but it might be ok if it is simply advising you about what TV show to watch tonight.
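This trade-off can be made concrete with a small numerical sketch. The formula is the expected-value calculation spelled out in the appendix; all the numbers below are purely illustrative:

```python
def expected_value(accuracy, reward, penalty):
    """Expected value of one algorithmic decision: E = a*r - (1-a)*p."""
    return accuracy * reward - (1 - accuracy) * penalty

# The same 70%-accurate algorithm is fine for low-stakes recommendations...
tv_show = expected_value(accuracy=0.7, reward=1.0, penalty=0.1)        # 0.67
# ...but disastrous when errors carry a huge penalty.
power_plant = expected_value(accuracy=0.7, reward=1.0, penalty=1000.0)  # -299.3
print(tv_show, power_plant)
```

With identical accuracy, the decision is worth automating in the first setting and clearly not in the second; only the penalty changed.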
2. Supervision: watch out
We could bring in human supervisors to check the decisions made by the algorithm and fix any errors they find. This makes more sense if the algorithm is not very accurate (so supervisors do not spend a lot of time checking correct decisions), and the net benefits from correcting wrong decisions (i.e., extra rewards plus avoided penalties) are high. Costs matter too. A rational organisation has more incentive to hire human supervisors if they do not get paid a lot, and if they are highly productive (i.e. it only takes a few of them to do the job).
Following from the example before, if a human supervisor fixes a silly recommendation in a TV website, this is unlikely to create a lot of value for the owner. The situation in a nuclear power station is completely different.
3. Scale: a race between machines and reality
What happens when we scale up the number of algorithmic decisions? Are there any limits to this growth?
This depends on several things, including whether algorithms gain or lose accuracy as they make more decisions, and the costs of ramping up algorithmic decision-making. In this situation, there are two interesting races going on.
1. There is a race between an algorithm’s ability to learn from the decisions it makes, and the amount of information that it obtains from new decisions. New machine learning techniques help algorithms ‘learn from experience’, making them more accurate as they make more decisions.[8] However, more decisions can also degrade an algorithm’s accuracy. Perhaps it is forced to deal with weirder cases, or new situations it is not trained to deal with.[9] To make things worse, when an algorithm becomes very popular (makes more decisions), people have more reasons to game it.
My prior is that the ‘entropic forces’ that degrade algorithm accuracy will win out in the end: no matter how much more data you collect, it is just impossible to make perfect predictions about a complex, dynamic reality.
2. The second race is between the data scientists creating the algorithms and the supervisors checking these algorithms’ decisions. Data scientists are likely to ‘beat’ the human supervisors because their productivity is higher: a single algorithm, or an improvement in an algorithm, can be scaled up over millions of decisions. By contrast, supervisors need to check each decision individually. This means that as the number of decisions increases, most of the organisation’s labour bill will be spent on supervision, with potentially spiralling costs as the supervision process gets bigger and more complicated.
What happens at the end?
When considered together, the decline in algorithmic accuracy and the increase in labour costs I just described are likely to limit the number of algorithmic decisions an organisation can make economically. But whether and when this happens depends on the specifics of the situation.
Implications for organisations and policy
The processes I discussed above have many interesting organisational and policy implications. Here are some of them:
1. Finding the right algorithm-domain fit
As I said, algorithms making decisions in situations where the stakes are high need to be very accurate to make up for high penalties when things go wrong.[10] On the flipside, if the penalty from making an error is low, even inaccurate algorithms might be up to the task.
For example, the recommendation engines in platforms like Amazon or Netflix often make irrelevant recommendations, but this is not a big problem because the penalty from these errors is relatively low – we just ignore them. Data scientist Hillary Parker picked up on the need to consider the fit between model accuracy and decision context in a recent edition of the ‘Not So Standard Deviations’ podcast:
Most statistical methods have been tuned for the clinical trial implementation where you are talking about people’s lives and people dying with the wrong treatment, whereas in business settings the trade-offs are completely different.
One implication from this is that organisations in ‘low-stakes’ environments can experiment with new and unproven algorithms, including some with low accuracy early on. As these are improved, they can be transferred to ‘high-stakes’ domains. The tech companies that develop these algorithms often release them as open source software for others to download and improve, making these spill-overs possible.
2. There are limits to algorithmic decision-making in high stakes domains
Algorithms need to be applied much more carefully in domains where the penalties from errors are high, such as health or the criminal justice system, and when dealing with groups who are more vulnerable to algorithmic errors.[11] Only highly accurate algorithms are suitable for these risky decisions, unless they are complemented with expensive human supervisors who can find and fix errors. This will create natural limits to algorithmic decision-making: how many people can you hire to check an expanded number of decisions? Human attention remains a bottleneck to more decisions.
If policymakers want more and better use of algorithms in these domains, they should invest in R&D to improve algorithmic accuracy, encourage the adoption of high-performing algorithms from other sectors, and experiment with new ways of organising that help algorithms and their supervisors work better as a team.
Commercial organisations are not immune to some of these problems: YouTube has, for example, started blocking adverts in videos with fewer than ten thousand views. In those videos, the rewards from correct algorithmic ad-matching are probably low (they have low viewership) and the penalties could be high (many of these videos are of dubious quality). In other words, these decisions have low expected value, so YouTube has decided to stop making them. Meanwhile, Facebook just announced that it is hiring 3,000 human supervisors (almost a fifth of its current workforce) to moderate the content in its network. You can imagine how the need to supervise more decisions might put some brakes on its ability to scale up algorithmic decision-making indefinitely.
3. The pros and cons of crowdsourced supervision
One way to keep supervision costs low and coverage of decisions high is to crowdsource supervision to users, for example by giving them tools to report errors and problems. YouTube, Facebook and Google have all done this in response to their algorithmic controversies. Alas, getting users to police online services can feel unfair and upsetting. As Sarah T Roberts, a law professor, pointed out in a recent interview about the Facebook violent video controversy:
The way this material is often interrupted is because someone like you or me encounters it. This means a whole bunch of people saw it and flagged it, contributing their own labour and non-consensual exposure to something horrendous. How are we going to deal with community members who may have seen that and are traumatized today?
4. Why you should always keep a human in the loop
Even when penalties from error are low, it still makes sense to keep humans in the loop of algorithmic decision-making systems.[12] Their supervision provides a buffer against sudden declines in performance if (as) the accuracy of algorithms decreases. When this happens, the number of erroneous decisions detected by humans and the net benefit from fixing them increase. They can also ring the alarm, letting everyone know that there is a problem with the algorithms that needs fixing.[13]
This could be particularly important in situations where errors create penalties with a delay, or penalties that are hard to measure or hidden (say if erroneous recommendations result in self-fulfilling prophecies, or costs that are incurred outside the organisation).
There are many examples of this. In the YouTube advertising controversy, the big accumulated penalty from previous errors only became apparent with a delay, when brands noticed that their adverts were being posted against hate videos. The controversy with fake news after the US election is an example of hard to measure costs: algorithms’ inability to discriminate between real news and hoaxes creates costs for society, potentially justifying stronger regulations and more human supervision. Politicians have made this point when calling on Facebook to step up its fight against fake news in the run-up to the UK election:
Looking at some of the work that has been done so far, they don’t respond fast enough or at all to some of the user referrals they can get. They can spot quite quickly when something goes viral. They should then be able to check whether that story is true or not and, if it is fake, blocking it or alerting people to the fact that it is disputed. It can’t just be users referring the validity of the story. They [Facebook] have to make a judgment about whether a story is fake or not.
5. From abstract models to real systems
Before we use economic models to inform action, we need to define and measure model accuracy, penalties and rewards, changes in algorithmic performance due to environmental volatility, levels of supervision and their costs, and that is only the beginning.[14]
This is hard but important work that could draw on existing technology assessment and evaluation tools, including methods to quantify non-economic outcomes (e.g. in health).[15] One could even use rich data from an organisation’s information systems to simulate the impact of algorithmic decision-making and its organisation before implementing it. We are seeing more examples of these applications, such as the financial ‘regtech’ pilots that the European Commission is running, or the ‘collusion incubators’ mentioned in a recent Economist article on price discrimination.
Coda: Piecemeal social engineering in an age of algorithms
Writing in Nature, Kate Crawford and Ryan Calo recently called for:
a practical and broadly applicable social-systems analysis [that] thinks through all the possible effects of AI systems on all parties [drawing on] philosophy, law, sociology, anthropology and science-and-technology studies, among other disciplines.
Calo and Crawford did not include economists in their list. Yet as this blog suggests, economics thinking has much to contribute to these important analyses and debates. Thinking about algorithmic decisions in terms of their benefits and costs, the organisational designs we can use to manage their downsides, and the impact of more decisions on the value that algorithms create can help us make better decisions about when and how to use them.
With every passing year, economics must become more and more about the design of the machines that mediate human social behaviour. A networked information system guides people in a more direct, detailed and literal way than does policy. Another way to put it is that economics must turn into a large-scale, systemic version of user interface design.
Designing organisations where algorithms and humans work together to make better decisions will be an important part of this agenda.
Acknowledgements
This blog benefited from comments from Geoff Mulgan, and was inspired by conversations with John Davies. The image above represents a precision-recall curve in a multi-label classification problem. It shows the propensity of a random forests classification algorithm to make mistakes when one sets different rules (probability thresholds) for putting observations in a category.
Appendix: Three economics vignettes about algorithmic decision-making
The three vignettes below are very simplified formalisations of algorithmic decision-making situations. My main inspiration was Human fallibility and economic organization, a 1985 paper by Joe Stiglitz and Raj Sah where the authors model how two organisational designs – hierarchies and ‘polyarchies’ (flat organisations) – cope with human error. Their analysis shows that hierarchical organisations where decision-makers lower in the hierarchy are supervised by people further up tend to reject more good projects, while polyarchies where agents make decisions independently from each other, tend to accept more bad projects. A key lesson from their model is that errors are inevitable, and the optimal organisational design depends on context.
Vignette 1: Algorithm says maybe
Let’s imagine an online video company that matches adverts with videos in its catalogue. This company hosts millions of videos, so it would be economically unviable for it to rely on human labour to do the job. Instead, its data scientists develop algorithms to do this automatically.[16] The company looks for the algorithm that maximises the expected value of the matching decisions. This value depends on three factors:[17]
- Algorithm accuracy (a): the probability (between 0 and 1) that the algorithm will make the correct decision.[18]
- Decision reward (r): the reward when the algorithm makes the right decision.
- Error penalty (p): the cost of making the wrong decision.
We can combine accuracy, reward and penalty to calculate the expected value of the decision:
E = ar – (1-a)p [1]
This value is positive when the expected benefits from the algorithm’s decision outweigh the expected costs (or risks):
ar > (1-a)p [2]
Which is the same as saying that:
a/(1-a) > p/r [3]
The odds of making the right decision should be higher than the ratio between penalty and benefit.
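Rearranging [3] also gives a minimum viable accuracy, a > p/(p+r). A minimal sketch (the numbers are hypothetical):

```python
def min_accuracy(reward, penalty):
    """Smallest accuracy at which E = a*r - (1-a)*p turns positive;
    follows from the odds condition a/(1-a) > p/r, i.e. a > p/(p+r)."""
    return penalty / (penalty + reward)

print(min_accuracy(reward=1.0, penalty=1.0))   # 0.5: need better than a coin flip
print(min_accuracy(reward=1.0, penalty=99.0))  # 0.99: risky decisions demand accuracy
```

The steeper the penalty relative to the reward, the closer to perfect the algorithm must be before it is worth deploying unsupervised.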
Enter human
We can reduce the risk of errors by bringing a human supervisor into the situation. This human supervisor can recognise and fix errors in algorithmic decisions. The impact of this strategy on the expected value of a decision depends on two parameters:
- Coverage ratio (k): k is the probability that the human supervisor will check a decision by the algorithm. If k is 1, this means that all algorithmic decisions are checked by a human.
- Supervision cost (cs(k)): this is the cost of supervising the decisions of the algorithm. The cost depends on the coverage ratio k because checking more decisions takes time.
The expected value of an algorithmic decision with human supervision is the following:[19]
Es = ar + (1-a)kr – (1-a)(1-k)p – cs(k) [4]
This equation picks up the fact that some errors are detected and rectified, while others are not. We subtract [1] from [4] to obtain the extra expected value from supervision. After some algebra, supervision adds value when:
(r+p)(1-a)k > cs(k) [5]
Supervision only makes economic sense when its expected benefit (which depends on the probability that the algorithm has made a mistake, that this mistake is detected, and the net benefits from flipping a mistake into a correct decision) is larger than the cost of supervision.
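Condition [5] can be sketched numerically as well (all parameter values below are illustrative):

```python
def supervision_gain(accuracy, reward, penalty, coverage, cost):
    """Net expected value added per decision by human supervision:
    (r + p)*(1 - a)*k - c_s(k). Positive means supervision pays off."""
    return (reward + penalty) * (1 - accuracy) * coverage - cost

# Accurate algorithm, low stakes: checking every decision is not worth it.
print(supervision_gain(accuracy=0.99, reward=1.0, penalty=0.1, coverage=1.0, cost=0.05))
# Mediocre algorithm, high stakes: even partial coverage creates value.
print(supervision_gain(accuracy=0.7, reward=1.0, penalty=10.0, coverage=0.5, cost=0.2))
```

The sign flips exactly as the text suggests: supervision earns its keep when errors are frequent, penalties are steep, and supervisors are cheap.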
Scaling up
Here, I consider what happens when we start increasing n, the number of decisions being made by the algorithm.
The expected value is:
E(n) = nar + n(1-a)kr – n(1-a)(1-k)p [6]
And the costs are C(n)
How do these things change as n grows?
I make some assumptions to simplify things: the organisation wants to hold k constant, and the rewards r and penalties p remain constant as n increases.[20]
This leaves us with two variables that change as n increases: a and C.
I assume that algorithmic accuracy a declines with the number of decisions, because the processes that degrade accuracy are stronger than those that improve it.
I assume that C, the production costs, depends only on the labour of data scientists and supervisors, who are paid salaries wds and ws respectively.
Based on this, and some calculus, the change in expected benefits as we make more decisions is:
∂E(n)/∂n = kr + (a + n(∂a/∂n))(1-k)(r+p) – (1-k)p [7]
This means that as more decisions are made, the aggregated expected benefits grow in a way that is modified by changes in the marginal accuracy of the algorithm. On the one hand, more decisions mean scaled up benefits from more correct decisions. On the other, the decline in accuracy generates an increasing number of errors and penalties. Some of these are offset by human supervisors.
This is what happens with costs:
∂C/∂n = (∂C/∂Lds)(∂Lds/∂n) + (∂C/∂Ls)(∂Ls/∂n) [8]
As the number of decisions increases, costs grow because the organisation has to recruit more data scientists and supervisors.
[8] is the same as saying:
∂C/∂n = wds/(∂n/∂Lds) + ws/(∂n/∂Ls) [9]
The labour costs of each occupation are directly related to its salary, and inversely related to its marginal productivity. If we assume that data scientists are more productive than supervisors, this means that most of the increases in costs with n will be caused by increases in the supervisor workforce.
The expected value (benefits minus costs) from decision-making for the organisation is maximised with an equilibrium number of decisions ne where the marginal value of an extra decision equals its marginal cost:
kr + (a + n(∂a/∂n))(1-k)(r+p) – (1-k)p = wds/(∂n/∂Lds) + ws/(∂n/∂Ls) [10]
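The equilibrium can also be found numerically: maximise the expected benefits from [6] minus labour costs over n. This is a sketch under strong illustrative assumptions (linearly declining accuracy, a constant marginal labour cost per decision); every parameter value below is hypothetical:

```python
def total_value(n, a0=0.95, decay=1e-5, r=1.0, p=2.0, k=0.5, marginal_cost=0.3):
    """Expected benefits E(n) from equation [6] minus labour costs C(n)."""
    a = max(a0 - decay * n, 0.0)  # accuracy erodes as decisions scale up
    per_decision = a * r + (1 - a) * k * r - (1 - a) * (1 - k) * p
    return n * per_decision - marginal_cost * n

# Scan candidate scales and pick the value-maximising number of decisions.
n_eq = max(range(0, 100_001, 100), key=total_value)
print(n_eq)  # → 20800 with these parameters
```

Beyond this point, the accuracy decay means each extra decision destroys more value in penalties and supervision than it earns in rewards, which is exactly the natural limit the text describes.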
Extensions
Above, I have kept things simple by making some strong assumptions about each of the situations being modelled. What would happen if we relaxed these assumptions?
Here are some ideas:
Varieties of error
First, the analysis does not take into account that different types of errors (e.g. false positives and negatives, errors made with different degrees of certainty etc.) could have different rewards and penalties. I have also assumed certainty in rewards and penalties, when it would be more realistic to model them as random draws from probability distributions. This extension would help incorporate fairness and bias into the analysis. For example, if errors are more likely to affect vulnerable people (who suffer higher penalties), and these errors are less likely to be detected, this could increase the expected penalty from errors.
Humans are not perfect either
All of the above assumes that algorithms err but humans do not. This is clearly not the case. In many domains, algorithms can be a desirable alternative to humans with deep-rooted biases and prejudices. In those situations, humans’ ability to detect and address errors is impaired, and this reduces the incentives to recruit them (the equivalent of a decline in their productivity). Organisations deal with all this by investing in technologies (e.g. crowdsourcing platforms) and quality assurance systems (including extra layers of human and algorithmic supervision) that manage the risks of human and algorithmic fallibility.
Scaling up rewards and penalties
Before, I assumed that the marginal penalties and rewards remain constant as the number of algorithmic decisions increases. This need not be the case. Here are examples of situations where these parameters change with the number of decisions being made:
- Rewards can increase with more decisions: the organisation gains market power, or is able to use price discrimination in more transactions.
- Rewards can decrease with more decisions: the organisation runs out of valuable decisions to make.
- Penalties can increase with more decisions: the organisation becomes more prominent and its mistakes receive more attention.
- Penalties can decrease with more decisions: users get accustomed to errors.
Getting an empirical handle on these processes is very important, as they could determine if there is a natural limit to the number of algorithmic decisions that an organisation can make economically in a domain or market, with potential implications for its regulation.
Endnotes
[1] I use the term ‘algorithm’ in a restricted sense, to refer to technologies that turn information into predictions (and, depending on the system receiving the predictions, decisions). There are many processes to do this, including rule-based systems, statistical systems, machine learning systems and Artificial Intelligence (AI). These systems vary in their accuracy, scalability, interpretability, and ability to learn from experience, so their specific features should be considered in the analysis of algorithmic trade-offs.
[2] One could even say that machine learning is the science that manages trade-offs caused by the impossibility of eliminating algorithmic error. The famous ‘bias-variance’ trade off between fitting a model to known observations and predicting unknown ones is a good example of this.
[3] Some people would say that personalisation is undesirable because it can lead to discrimination and ‘filter bubbles’, but that is a question for another blog post.
[4] Dani Rodrik’s ‘Economics Rules’ makes a compelling case for models as simplistic but useful formalisations of complex reality.
[5] In a 2016 Harvard Business Review article, Ajay Agrawal and colleagues sketched out an economic analysis of machine learning as a technology that lowers the costs of prediction. My way of looking at algorithms is similar because predictions are inputs into decision-making.
[6] This includes personalised experiences and recommendations in e-commerce and social networking sites, or fraud detection and algorithmic trading in finance.
[7] For example, if YouTube shows me an advert which is highly relevant to my interests, I might buy the product, and this generates income for the advertiser, the video producer and YouTube. If it shows me a completely irrelevant or even offensive advert, I might stop using YouTube, or kick up a fuss in my social network of choice.
[8] Reinforcement learning builds agents that use the rewards and penalties from previous actions to make new decisions.
[9] This is what happened with the Google Flu Trends system used to predict flu outbreaks based on Google searches – people changed their search behaviour, and the algorithm broke down.
[10] In many cases, the penalties might be so high that we decide that an algorithm should never be used, unless it is supervised by humans.
[11] Unfortunately, care is not always taken when implementing algorithmic systems in high-stakes situations. Cathy O’Neil’s ‘Weapons of Math Destruction’ gives many examples of this, ranging from the criminal justice system to university admissions.
[12] Mechanisms for accountability and due process are another example of human supervision.
[13] Using Albert Hirschman’s model of exit, voice and loyalty, we could say that supervision plays the role of ‘voice’, helping organisations detect a decline in quality before users begin exiting.
[14] The appendix flags up some of my key assumptions, and suggests extensions.
[15] This includes rigorous evaluation of algorithmic decision-making and its organisation using Randomised Controlled Trial methods like those proposed by Nesta’s Innovation Growth Lab.
[16] This decision could be based on how well similar adverts perform when matched with different types of videos, on demographic information about the people who watch the videos, or other things.
[17] The analysis in this blog assumes that the results of algorithmic decisions are independent from each other. This assumption might be violated in situations where algorithms generate self-fulfilling prophecies (e.g. logically, a user is more likely to click an advert she is shown than one she is not). This is a hard problem to tackle, but researchers are developing methods based on randomisation of algorithmic decisions to address it.
[18] This does not distinguish between different types of error (e.g. false positives and false negatives). I come back to this at the end.
[19] Here, I am assuming that human supervisors are perfectly accurate. As we know from behavioural economics, this is a very strong assumption. I consider this issue at the end.
[20] I consider the implications of making different assumptions about marginal rewards and penalties at the end.
This post was originally published on Nesta.
Thursday night, dozens of robots designed and built by undergraduates in a mechanical engineering class endured hours of intense, boisterous, and often jubilant competition as they scrambled to rack up points in one-on-one clashes on special “Star Wars”-themed playing arenas.
As has often happened in these contests — which have been going on, and constantly evolving, since 1970 — the ultimate winner of the single-elimination tournament was not the robot that had most consistently racked up the highest scores all evening. Rather, it was a high-scoring bot that triumphed when its competitor missed a crucial scoring opportunity because its starting position was just slightly out of alignment.
The class, 2.007 (Design and Manufacturing I), which has 165 mostly sophomore students, begins by giving each student an identical kit of parts, from which they each have to create a robot to carry out a variety of tasks to score points. This year, in a nod to the 40th anniversary of the first “Star Wars” film, released in 1977, the robots crawled around and over a replica of a “Star Wars” X-wing Starfighter. Students could earn points by pulling up a sliding frame to rescue prisoners trapped in carbonite; by dumping Imperial stormtroopers into a trash trench; by activating a cantina band; or by spinning up one or both of two large cylindrical thrusters on the wings. Students could choose which tasks to have their robot try to accomplish, and had just one semester to design, test, and operate their bot.
The devices could be pre-programmed to carry out set tasks, but could also be manually controlled through a radio-linked controller. As in past years, the open-ended nature of the assignment — and the variety of different ways to score — led to a wide range of strategies and designs, ranging from tall towers that extended telescopically or with hinged sections, to elevator-like lifting devices, to small and nimble bots that scurried around to carry out multiple tasks, to an array of arms and devices for grasping or turning the different pieces. They sported names like Dodocopter, Bonnie and Clyde, Pitfall, Torque Toilet, Spinit to Winit, and Nicki Spinaj.
Students could earn extra points by accomplishing any of the tasks during an initial period when the robot had to perform autonomously, before the start of a manually remote-controlled round. The students were allowed to create multiple robots to carry out different tasks, as long as they were all made from the basic kit of parts, and all fit within a designated starting area. Most of the students opted to build two devices, and some even made three.
Second-place finisher Richard Moyer, with his small but powerful and robust robot called Tornado, consistently scored 960.5 points in every round (the highest score achieved by any of the bots), by spinning both the lower and upper thrusters to their maximum speeds — and by using the lower thruster during the high-scoring autonomous period. But in the final matchup, Tornado was just slightly out of place in the starting box and missed the thruster, losing out on that big initial score.
The robot used a simple but reliable design, which sported a single horizontally mounted drive wheel that it used to spin both the lower and upper thrusters, and also to activate an elevator mechanism that carried it from one wing to the other. It was “like the Swiss army knife of robots,” thanks to this multifunction device, said Sangbae Kim, an associate professor of mechanical engineering and co-instructor of the course, who was dressed as the “Star Wars” Wookiee, Chewbacca.
The grand-prize winner, Tom Frejowski, also built a compact, powerful robot that concentrated on the spinning task, and scored 640 in the final round to take home the top trophy (a replica of the MIT dome). To make a straight shot from the starting position to the thruster and line up just right to spin the heavy cylinder, Frejowski’s robot used a single motor to drive both of its front wheels, which helped him earn consistently high scores. “That’s how he goes dead straight every time,” said co-instructor Amos Winter, an assistant professor of mechanical engineering, who was dressed as Darth Vader and shared the emcee duties with Kim.
During the tournament, which took place in the Johnson Ice Rink, all of the course teachers and assistants were dressed in various “Star Wars” costumes, and a packed audience of fellow students, families, and visitors of all ages cheered their encouragement with great enthusiasm. During a break, each of the teaching assistants was presented with a special memento: a beaver-cut twig from a beaver dam in Nova Scotia, symbolizing MIT’s beaver mascot, and nature’s original mechanical engineer.
Echoing the sentiments of many students in the class, sophomore James Li said of the class in a pre-taped video: “I had a bit of building experience, but I never had to design and build anything of this complexity. … It was a great experience.”
RoboCup is an international scientific initiative with the goal to advance the state of the art of intelligent robots. Established in 1997, the original mission was to field a team of robots capable of winning against the human soccer World Cup champions by 2050.
The competition has now grown into an international movement with a variety of leagues that go beyond soccer. Teams compete to make robots for rescue missions, the home, and industry. And it’s not just researchers; kids have their own league, too. Last year, almost 3,000 participants and 1,200 robots competed.
To celebrate 20 years of RoboCup, the Federation is launching a video series featuring each of the leagues with one short video for those who just want a taster, and one long video for the full story. Robohub will be featuring one league every week leading up to RoboCup 2017 in Nagoya, Japan.
This week, we take a whirlwind tour of the RoboCup competition, spanning all the leagues. You’ll hear about the history and ambitions of RoboCup from the trustees, and inspiring teams from around the world.
Short Version
Long Version
Can’t wait to watch the rest? You can view all the videos on the RoboCup playlist below:
https://www.youtube.com/playlist?list=PLEfaZULTeP_-bqFvCLBWnOvFAgkHTWbWC
Please spread the word! And if you would like to join a team, check here for more information.
We have developed a computationally efficient trajectory generator for six degrees-of-freedom multirotor vehicles, i.e. vehicles that can independently control their position and attitude. The trajectory generator can produce approximately 500,000 trajectories per second, each guiding the multirotor vehicle from any initial state (position, velocity and attitude) to any desired final state in a given time. In this video, we show an example application that requires the evaluation of a large number of trajectories in real time.
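The post does not spell out the generator's mathematics (the linked paper does), but one common way to reach such throughput is to compute, per axis, a fixed-time polynomial whose coefficients follow in closed form from the boundary states, with no iterative solver. The sketch below is an illustrative quintic-polynomial version of that idea under those assumptions; it is not the authors' implementation:

```python
import numpy as np

def quintic_trajectory(p0, v0, a0, pf, vf, af, T):
    """Illustrative sketch (not the ETH implementation): a quintic
    polynomial taking an initial state (position p0, velocity v0,
    acceleration a0) to a final state (pf, vf, af) in a fixed time T.
    Inputs may be scalars or per-axis NumPy arrays. Returns the six
    coefficients c, where p(t) = sum_i c[i] * t**i."""
    p0, v0, a0 = map(np.asarray, (p0, v0, a0))
    pf, vf, af = map(np.asarray, (pf, vf, af))
    # Residual boundary conditions once the initial-state terms are removed.
    dp = pf - (p0 + v0 * T + 0.5 * a0 * T**2)
    dv = vf - (v0 + a0 * T)
    da = af - a0
    # Closed-form solution of the 3x3 linear system for c3, c4, c5.
    c3 = (20 * dp - 8 * dv * T + da * T**2) / (2 * T**3)
    c4 = (-30 * dp + 14 * dv * T - 2 * da * T**2) / (2 * T**4)
    c5 = (12 * dp - 6 * dv * T + da * T**2) / (2 * T**5)
    return np.stack([p0, v0, 0.5 * a0, c3, c4, c5])
```

Because each candidate trajectory costs only a handful of arithmetic operations, very large numbers of initial/final state pairs can be evaluated per second, which is the property the paragraph above describes.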
Multirotor vehicle
The multirotor vehicle used in the demonstration is an omni-directional eight-rotor vehicle. Its unique actuator configuration gives it full force and torque authority in all three dimensions, allowing it to fly novel maneuvers. For more details, please refer to the YouTube video or the research paper: “Design, Modeling and Control of an Omni-Directional Aerial Vehicle”, IEEE International Conference on Robotics and Automation (ICRA), 2016.
Researchers
Dario Brescianini and Raffaello D’Andrea
Institute for Dynamic Systems and Control (IDSC), ETH Zurich, Switzerland – http://www.idsc.ethz.ch
This work is supported by and builds upon prior contributions by numerous collaborators in the Flying Machine Arena project. See the list here. This research was supported by the Swiss National Science Foundation (SNSF).
Robo Done, the robotic academy franchise for kids from Osaka, Japan, celebrated Japan’s Children’s Day on the 5th of May at their annual event, Robot Festival 2017 or RoboFes. The event welcomed over 1,000 attendees, including children and their parents.
This was the second time Robo Done has celebrated the festival. In only one year, attendance has roughly tripled (from 350 attendees in 2016 to 1,012 in 2017). The festival was held at the KANDAI MeRise Campus of Kansai University in Osaka, Japan and has become the biggest event at the campus.
The main activity was the Robot Contest, using LEGO Mindstorms, with morning and afternoon leagues. Over 200 children — from 6 years and up — participated in the championship. The kids built robots in pairs and programmed their creations, repeating the process of trial and error against a time limit. Several IT and robot-related companies had booths, as well as students of the university, who offered a variety of activities for the kids to enjoy.
Robo Done will hold RoboFes again in 2018, hoping to inspire even more kids to enjoy robotics and programming. We hope RoboFes will become a regular event during Japan’s “Golden Week!”
Science and technology are essential tools for innovation, and to reap their full potential, we also need to articulate and solve the many aspects of today’s global issues that are rooted in the political, cultural, and economic realities of the human world. With that mission in mind, MIT’s School of Humanities, Arts, and Social Sciences has launched The Human Factor — an ongoing series of stories and interviews that highlight research on the human dimensions of global challenges. Contributors to this series also share ideas for cultivating the multidisciplinary collaborations needed to solve the major civilizational issues of our time.
David Mindell, the Frances and David Dibner Professor of the History of Engineering and Manufacturing and Professor of Aeronautics and Astronautics at MIT, researches the intersections of human behavior, technological innovation, and automation. Mindell is the author of five acclaimed books, most recently “Our Robots, Ourselves: Robotics and the Myths of Autonomy” (Viking, 2015). He is also the co-founder of Humatics Corporation, which develops technologies for human-centered automation. SHASS Communications recently asked him to share his thoughts on the relationship of robotics to human activities, and the role of multidisciplinary research in solving complex global issues.
Q: A major theme in recent political discourse has been the perceived impact of robots and automation on the United States labor economy. In your research into the relationship between human activity and robotics, what insights have you gained that inform the future of human jobs, and the direction of technological innovation?
A: In looking at how people have designed, used, and adopted robotics in extreme environments like the deep ocean, aviation, or space, my most recent work shows how robotics and automation carry with them human assumptions about how work gets done, and how technology alters those assumptions. For example, the U.S. Air Force’s Predator drones were originally envisioned as fully autonomous — able to fly without any human assistance. In the end, these drones require hundreds of people to operate.
The new success of robots will depend on how well they situate into human environments. As in chess, the strongest players are often the combinations of human and machine. I increasingly see that the three critical elements are people, robots, and infrastructure — all interdependent.
Q: In your recent book “Our Robots, Ourselves,” you describe the success of a human-centered robotics, and explain why it is the more promising research direction — rather than research that aims for total robotic autonomy. How is your perspective being received by robotic engineers and other technologists, and do you see examples of research projects that are aiming at human-centered robotics?
A: One still hears researchers describe full autonomy as the only way to go; often they overlook the multitude of human intentions built into even the most autonomous systems, and the infrastructure that surrounds them. My work describes situated autonomy, where autonomous systems can be highly functional within human environments such as factories or cities. Autonomy as a means of moving through physical environments has made enormous strides in the past ten years. As a means of moving through human environments, we are only just beginning. The new frontier is learning how to design the relationships between people, robots, and infrastructure. We need new sensors, new software, new ways of architecting systems.
Q: What can the study of the history of technology teach us about the future of robotics?
A: The history of technology does not predict the future, but it does offer rich examples of how people build and interact with technology, and how it evolves over time. Some problems just keep coming up over and over again, in new forms in each generation. When the historian notices such patterns, he can begin to ask: Is there some fundamental phenomenon here? If it is fundamental, how is it likely to appear in the next generation? Might the dynamics be altered in unexpected ways by human or technical innovations?
One such pattern is how autonomous systems have been rendered less autonomous when they make their way into real world human environments. Like the Predator drone, future military robots will likely be linked to human commanders and analysts in some ways as well. Rather than eliding those links, designing them to be as robust and effective as possible is a worthy focus for researchers’ attention.
Q: MIT President L. Rafael Reif has said that the solutions to today’s challenges depend on marrying advanced technical and scientific capabilities with a deep understanding of the world’s political, cultural, and economic realities. What barriers do you see to multidisciplinary, sociotechnical collaborations, and how can we overcome them?
A: I fear that as our technical education and research continues to excel, we are building human perspectives into technologies in ways not visible to our students. All data, for example, is socially inflected, and we are building systems that learn from those data and act in the world. As a colleague from Stanford recently observed, go to Google image search and type in “Grandma” and you’ll see the social bias that can leak into data sets — the top results all appear white and middle class.
Now think of those data sets as bases of decision making for vehicles like cars or trucks, and we become aware of the social and political dimensions that we need to build into systems to serve human needs. For example, should driverless cars adjust their expectations for pedestrian behavior according to the neighborhoods they’re in?
Meanwhile, too much of the humanities has developed islands of specialized discourse that is inaccessible to outsiders. I used to be more optimistic about multidisciplinary collaborations to address these problems. Departments and schools are great for organizing undergraduate majors and graduate education, but the old two-cultures divides remain deeply embedded in the daily practices of how we do our work. I’ve long believed MIT needs a new school to address these synthetic, far-reaching questions and train students to think in entirely new ways.
Interview prepared by MIT SHASS Communications
Editorial team: Emily Hiestand (series editor), Daniel Evans Pritchard
Sailors assigned to Explosive Ordnance Disposal Mobile Unit 5 (EODMU5) Platoon 142 recover an unmanned underwater vehicle onto a Coastal Riverine Group 1 Detachment Guam MK VI patrol boat in the Pacific Ocean May 10, 2017. Credit: Mass Communication Specialist 1st Class Torrey W. Lee/U.S. Navy
May 8, 2017 – May 14, 2017
If you would like to receive the Weekly Roundup in your inbox, please subscribe at the bottom of the page.
News
The International Civil Aviation Organization announced that it plans to develop global standards for small unmanned aircraft traffic management. In a statement at the Association of Unmanned Vehicle Systems International’s Xponential trade conference, the United Nations agency said that as part of the initiative it has issued a Request for Information on air traffic management systems for drones. (GPS World)
Virginia Governor Terry McAuliffe has created a new office dedicated to drones and autonomous systems. According to Gov. McAuliffe, the Autonomous Systems Center for Excellence will serve as a “clearinghouse and coordination point” for research and development programs related to autonomous technologies. (StateScoop)
Commentary, Analysis, and Art
At the Telegraph, Alan Tovey writes that the U.K.’s exit from the European Union is unlikely to affect cross-channel cooperation on developing fighter drones.
Nautilus, a California startup, is developing a cargo drone that could carry thousands of pounds of goods over long distances. (Air & Space Magazine)
Drone maker Pulse Aerospace unveiled two new rotorcraft drones for military and commercial applications, the Radius 65 and the Vapor 15. (Press Release)
Piasecki Aerospace will likely submit its ARES demonstrator drone for the U.S. Marine Corps’ Unmanned Expeditionary Capabilities program. (FlightGlobal)
Defense firm Kratos confirmed that it has conducted several demonstration flights of a high performance jet drone for an undisclosed customer. (FlightGlobal)
Technology firm Southwest Research Institute has been granted a patent for a system by which military drones can collaborate with unmanned ground vehicles. (Unmanned Aerial Online)
The U.S. Army is interested in developing a mid-size unmanned cargo vehicle that could carry up to 800 pounds of payload. (FlightGlobal)
A drone flying over a bike race in Rancho Cordova, California crashed into a cyclist. (Market Watch)
Meanwhile, a consumer drone crashed into a car crossing the Sydney Harbor Bridge in Australia. It is the second time a drone has crashed at the site of the bridge in the past nine months. (Sydney Morning Herald)
Insurance company Travelers has trained over 150 drone operators to use drones for insurance appraisals over properties. (Insurance Journal)
A Latvian technology firm used a large multirotor drone to carry a skydiver to altitude before he parachuted back down to earth. (Phys.org)
Clear Flight Solutions and AERIUM Analytics are set to begin integrating the Robird drone system, a falcon-like drone that scares birds away from air traffic, at Edmonton International Airport. (Unmanned Systems Technology)
Industry Intel
The U.S. Army awarded General Atomics Aeronautical Systems a $221.6 million contract modification for 20 extended range Gray Eagle drones and associated equipment. (DoD)
The U.S. Air Force awarded General Electric a $14 million contract for work that includes the Thermal Management System for unmanned aircraft. (DoD)
Turkish Aerospace Industries will begin cooperating with ANTONOV Company on the development of unmanned systems. (Press Release)
Aker, a company that develops drones for agriculture, won $950,000 in funding from the Clean Energy Trust Challenge. (Chicago Tribune)
For updates, news, and commentary, follow us on Twitter. The Weekly Drone Roundup is a newsletter from the Center for the Study of the Drone. It covers news, commentary, analysis and technology from the drone world. You can subscribe to the Roundup here.