Archive 21.07.2017


Multi-directional gravity assist harness helps rehabilitation

Credit: EPFL

When training to regain movement after stroke or spinal cord injury (SCI), patients must once again learn how to keep their balance during walking movements. Current clinical methods support the weight of the patient during movement but, in doing so, set the body off balance. This means that when patients are ready to walk without mechanical assistance, it can be hard to re-train the body to balance against gravity. This is the issue addressed in a recent paper published in Science Translational Medicine by a team led by Courtine-Lab and featuring Ijspeert Lab, NCCR Robotics and EPFL.

During walking, a combination of forces moves the human body forward. The interaction of the feet with the ground creates the majority of forward propulsion, but with every step multiple muscles are engaged to maintain movement and prevent falls. In order to fully regain the ability to walk, patients must develop both the muscles and the neural pathways required for these movements.

During partial body weight-supported gait therapy (whereby a patient trains on a treadmill while a robotic support system prevents them from falling), a patient is merely lifted upwards, with no support for forward or sideways movements, massively altering how the person within the support system moves. In fact, those within the training system use shorter steps, slower movements and less body rotation than the same people tested walking unaided.

In an effort to reduce these limitations of current therapy methods, the team developed a multi-directional gravity assist mechanism, meaning that the system supports patients not only in remaining upright, but also in moving forwards. This individually tailored support allows patients to walk in a natural and comfortable way, training the body to counterbalance against gravity and repositioning the torso in a natural position for walking.

The team developed a system, RYSEN, which allows patients to operate within a wide area and across a range of activities, from standing and walking to following a slalom course or a horizontal ladder projected in light onto the floor. They developed an algorithm that measures how the patient is walking and updates the support given to them as they complete their training. The team found that the system had to be tailored to each patient before use, but that once the upward and forward forces were configured, almost all subjects experienced significant improvements in movement, even with only small forces applied to the torso. Indeed, patients who experienced paralysis after SCI or stroke found that, by using the system, they were able to walk and thus begin to rebuild muscles and neurological pathways.

This work exists within a larger framework at NCCR Robotics, whereby researchers are using gravity-assisted technologies to play a key role in clinical trials on electrical spinal cord stimulation with the ultimate aim of creating technologies that will improve rehabilitation after spinal cord injury and stroke.


Reference:
Mignardot, J.-B., Le Goff, C. G., van den Brand, R., Capogrosso, M., Fumeaux, N., Vallery, H., Anil, S., Lanini, J., Fodor, I., Eberle, G., Ijspeert, A., Schurch, B., Curt, A., Carda, S., Bloch, J., von Zitzewitz, J. and Courtine, G., “A multidirectional gravity-assist algorithm that enhances locomotor control in patients with stroke or spinal cord injury”, Science Translational Medicine, 2017.


Federal regulations pass next hurdle

This week’s news is preliminary, but a U.S. House committee panel passed new federal legislation that suggests sweeping changes in the US regulatory approach to robocars.

Today, all cars sold must comply with the Federal Motor Vehicle Safety Standards (FMVSS). This is a huge set of standards, full of requirements written with human-driven cars in mind, and making a radically different vehicle, like the Zoox, the Waymo Firefly, or a delivery robot, is simply not going to happen under those standards. There is a provision allowing NHTSA to offer exemptions, but mostly in small volumes for prototype and testing vehicles. The new rules would allow a vendor to get an exemption to make 100,000 vehicles per year, which should be enough for the early years of robocar deployment.

Secondly, these and other new regulations would preempt state regulations. Most players (except some states) have pushed for this. Many states don’t want the burden of regulating robocar design, since they don’t have the resources to do so, and most vendors don’t want what they call a “patchwork” of 50 regulations in the USA. My take is different. I agree the cost of a patchwork is not to be ignored, but the benefits of having jurisdictional competition may be much greater. When California proposed to ban vehicles like the Google Firefly, Texas immediately said, “Come to Texas, we won’t get in your way.” That pushed California to rethink. Having one regulation is good — but it has to be the right regulation, and we’re much too early in the game to know what the right regulation is.

This is just a committee in the House, and there is much more distance to go, including the Senate and the other usual hurdles. Whatever one thinks about how much regulation there should be, everybody has known that the FMVSS needs a difficult and complex revision to work in the world of robocars, and a temporary exemption can be a stopgap solution.

New Horizon 2020 robotics projects, 2016: HEPHAESTUS

In 2016, the European Union co-funded 17 new robotics projects from the Horizon 2020 Framework Programme for research and innovation. 16 of these resulted from the robotics work programme, and 1 project resulted from the Societal Challenges part of Horizon 2020. The robotics work programme implements the robotics strategy developed by SPARC, the Public-Private Partnership for Robotics in Europe (see the Strategic Research Agenda). 

Every week, euRobotics will publish a video interview with a project, so that you can find out more about their activities. This week features HEPHAESTUS: Highly automatEd PHysical Achievements and performancES using cable roboTs Unique Systems.

Objectives

The Hephaestus project explores novel concepts for introducing robotics and autonomous systems into the construction sector, where such products are rare or almost non-existent. It focuses on one of the most important parts of the sector: façades, and the work that must be done when this part of a building is built or needs maintenance. The project proposes a new, automated way to install these products, providing a solution that is highly industrialized not only in production but also in installation and maintenance.

Expected impact

Hephaestus aims to automate the on-site execution and installation process in order to strengthen the construction sector in Europe and to position the European robotics industry as a leader and reference in this large and growing market. The Hephaestus solution is expected to reduce the number of work accidents during façade installation by up to 90%, installation costs by around 20%, and annual maintenance and cleaning costs by around 44%. Curtain wall construction currently accounts for an annual market of €30,000 million in Europe.

Partners

FUNDACIÓN TECNALIA R&I 
TECHNISCHE UNIVERSITÄT MÜNCHEN
FRAUNHOFER-IPA
CNRS-LIRMM 
CEMVISA VICINAY
NLINK AS 

Coordinator: Julen Astudillo Larraz, TECNALIA
Julen.astudillo@tecnalia.com

Project website: www.hephaestus-project.eu

Watch all EU-projects videos


Artificial intelligence suggests recipes based on food photos

Pic2Recipe, an artificial intelligence system developed at MIT, can take a photo of an entree and suggest a similar recipe to it. Photo: Jason Dorfman/MIT CSAIL

There are few things social media users love more than flooding their feeds with photos of food. Yet we seldom use these images for much more than a quick scroll on our cellphones. Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) believe that analyzing photos like these could help us learn recipes and better understand people’s eating habits.

In a new paper with the Qatar Computing Research Institute (QCRI), the team trained an artificial intelligence system called Pic2Recipe to look at a photo of food and be able to predict the ingredients and suggest similar recipes.

“In computer vision, food is mostly neglected because we don’t have the large-scale datasets needed to make predictions,” says Yusuf Aytar, an MIT postdoc who co-wrote a paper about the system with MIT Professor Antonio Torralba. “But seemingly useless photos on social media can actually provide valuable insight into health habits and dietary preferences.”

The paper will be presented later this month at the Computer Vision and Pattern Recognition conference in Honolulu. CSAIL graduate student Nick Hynes was lead author alongside Amaia Salvador of the Polytechnic University of Catalonia in Spain. Co-authors include CSAIL postdoc Javier Marin, as well as scientist Ferda Ofli and research director Ingmar Weber of QCRI.

How it works

The web has spurred a huge growth of research in the area of classifying food data, but the majority of it has used much smaller datasets, which often leads to major gaps in labeling foods.

In 2014 Swiss researchers created the “Food-101” dataset and used it to develop an algorithm that could recognize images of food with 50 percent accuracy. Future iterations only improved accuracy to about 80 percent, suggesting that the size of the dataset may be a limiting factor.

Even the larger datasets have often been somewhat limited in how well they generalize across populations. A database from the City University in Hong Kong has over 110,000 images and 65,000 recipes, each with ingredient lists and instructions, but only contains Chinese cuisine.

The CSAIL team’s project aims to build off of this work but dramatically expand in scope. Researchers combed websites like All Recipes and Food.com to develop “Recipe1M,” a database of over 1 million recipes that were annotated with information about the ingredients in a wide range of dishes. They then used that data to train a neural network to find patterns and make connections between the food images and the corresponding ingredients and recipes.

Given a photo of a food item, Pic2Recipe could identify ingredients like flour, eggs, and butter, and then suggest several recipes that it determined to be similar to images from the database. (The team has an online demo where people can upload their own food photos to test it out.)
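The retrieval step can be sketched in a few lines; this is an illustrative toy, not the actual Pic2Recipe model (which learns a joint embedding over images and recipes), and the recipe database and ingredient lists here are invented:

```python
# Toy sketch of recipe retrieval by ingredient overlap. The real system
# uses a learned neural embedding; here we rank a tiny, made-up recipe
# database by Jaccard similarity to the predicted ingredient set.

def jaccard(a, b):
    """Jaccard similarity between two ingredient sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def suggest_recipes(predicted_ingredients, recipe_db, top_k=2):
    """Return the top_k recipes whose ingredient lists best match."""
    scored = sorted(
        recipe_db.items(),
        key=lambda item: jaccard(predicted_ingredients, item[1]),
        reverse=True,
    )
    return [name for name, _ in scored[:top_k]]

recipe_db = {
    "sugar cookies": ["flour", "eggs", "butter", "sugar"],
    "omelette": ["eggs", "butter", "cheese"],
    "fruit smoothie": ["banana", "milk", "honey"],
}

print(suggest_recipes(["flour", "eggs", "butter"], recipe_db))
```

A learned embedding can match recipes even when ingredient names differ; simple set overlap like this cannot, which is part of why the large Recipe1M dataset mattered.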

“You can imagine people using this to track their daily nutrition, or to photograph their meal at a restaurant and know what’s needed to cook it at home later,” says Christoph Trattner, an assistant professor at MODUL University Vienna in the New Media Technology Department who was not involved in the paper. “The team’s approach works at a similar level to human judgement, which is remarkable.”

The system did particularly well with desserts like cookies or muffins, since that was a main theme in the database. However, it had difficulty determining ingredients for more ambiguous foods, like sushi rolls and smoothies.

It was also often stumped when there were similar recipes for the same dishes. For example, there are dozens of ways to make lasagna, so the team needed to make sure the system wouldn’t “penalize” recipes that are similar when trying to separate those that are different. (One way to solve this was by checking whether the ingredients in each are generally similar before comparing the recipes themselves.)

In the future, the team hopes to be able to improve the system so that it can understand food in even more detail. This could mean being able to infer how a food is prepared (i.e. stewed versus diced) or distinguish different variations of foods, like mushrooms or onions.

The researchers are also interested in potentially developing the system into a “dinner aide” that could figure out what to cook given a dietary preference and a list of items in the fridge.

“This could potentially help people figure out what’s in their food when they don’t have explicit nutritional information,” says Hynes. “For example, if you know what ingredients went into a dish but not the amount, you can take a photo, enter the ingredients, and run the model to find a similar recipe with known quantities, and then use that information to approximate your own meal.”

The project was funded, in part, by QCRI, as well as the European Regional Development Fund (ERDF) and the Spanish Ministry of Economy, Industry, and Competitiveness.

Robots to the rescue!

Emily – short for Emergency Integrated Lifesaving Lanyard – is a remote-controlled rescue boat used by lifeguards to save people’s lives at sea (Photo: Hydrolanix – EMILY robot)

This article was first published on the IEC e-tech website.

Rapid advances in technology are revolutionizing the roles of aerial, terrestrial and maritime robotic systems in disaster relief, search and rescue (SAR) and salvage operations. Robots and drones can be deployed quickly in areas deemed too unsafe for humans and are used to guide rescuers, collect data, deliver essential supplies or provide communication services.

Well-established use

The first reported use of SAR robots was to explore the wreckage beneath the collapsed twin towers of the World Trade Center in New York after the September 2001 terrorist attacks. Drones and robots have been used to survey damage after disasters such as the Fukushima Daiichi nuclear power plant accident in Japan in 2011 and the earthquakes in Haiti (2010) and Nepal (2015). Up to now, more than 50 deployments of disaster robots have been documented throughout the world, according to the Texas-based Center for Robot‑Assisted Search & Rescue (CRASAR).

Robin Murphy, head of CRASAR and author of the book Disaster Robotics, says:

The impact of earthquakes, hurricanes, flooding […] is increasing, so the need for robots for all phases of a disaster, from prevention to response and recovery, will increase as well.

Eyes in the sky

Drones, also known as unmanned aerial vehicles (UAVs), can be used to detect and enter damaged buildings, assisting rescue robots and responders on the ground by speeding up the search for survivors through prioritizing which areas to search first. The more quickly SAR teams respond, the higher the survival rate is likely to be. Rescue drones create real-time maps by taking aerial surveys and send back photos, videos and sensor data to support damage assessments.

Drones used for SAR and disaster relief are most commonly powered by rechargeable batteries and are operated autonomously through onboard computers or by remote control. Their equipment typically comprises radar and laser scanners, multiple sensors and video and optical cameras as well as infrared cameras that are used to identify heat signatures of human bodies and other objects. This helps rescuers to locate survivors at night and in large, open environments and to identify hot spots from fires. Listening devices can pick up hard-to-hear audio, while Wi-Fi antennas and other attachments detect signals given off by mobile phones and plot a map that outlines the locations of victims.

New technologies in use or development for rescue drones and robots include ways of increasing survivor detection. Sensors scan areas for heartbeats and breathing, multisensor probes respond to odours or sounds and chemical sensors signal the presence of gases.

Standards put safety first

Much of the technology used in drones comes from commodity electronics developed for consumer essentials like mobile phones. Drones also require global positioning system (GPS) units, wireless transmitters, signal processors and microelectromechanical systems (MEMS). The flight controller also collects data from barometric pressure and airspeed sensors.

IEC International Standards produced by a range of IEC Technical Committees (TCs) and Subcommittees (SCs) cover the components of drones such as batteries, MEMS and other sensors, with an emphasis on safety and interoperability.

IEC TC 47: Semiconductor devices, and its SC 47F: Micro electromechanical systems, are responsible for compiling a wide range of International Standards for the semiconductor devices used in sensors and the MEMS essential to the safe operation of drone flights. These include accelerometers, altimeters, magnetometers (compasses), gyroscopes and pressure sensors. IEC TC 56: Dependability, covers the reliability of electronic components and equipment.

IEC TC 2: Rotating machinery, prepares International Standards covering specifications for rotating electrical machines, while IEC TC 91: Electronics assembly technology, is responsible for standards on electronic assembly technologies including components.

IEC SC 21A: Secondary cells and batteries containing alkaline or other non-acid electrolytes, compiles International Standards for batteries used in mobile applications, as well as for large-capacity lithium cells and batteries.

Ideal for isolated and remote hard-to-access areas

Using drones is useful not only when natural disasters make access by air, land, sea or road difficult, but also in isolated regions that lack accessible infrastructure. Recently, drones have started delivering medical supplies in areas where finding emergency healthcare is extremely difficult. In 2014, Médecins Sans Frontières piloted the use of drones to deliver vaccines and medicine in Papua New Guinea. In 2016, the US robotics company Zipline launched a drone delivery service, in partnership with the government of Rwanda, to supply blood and medical supplies throughout the mountainous East African country. Zipline says its battery-powered drones can fly 120 km on a single charge to deliver medicine speedily, without the need for refrigeration or insulation.

Rwanda launches world’s first drone service to deliver blood to patients in remote areas of the country (Photo: Zipline)

A project by a company in the Netherlands to help refugees who get into difficulty in the Mediterranean Sea offers another example of drones being used for humanitarian purposes. Its search and rescue (SAR) drone is intended to fly over long distances, detect boats and drop life jackets, life buoys, food and medicine if necessary.

Currently only about a quarter of the world’s countries regulate the use of drones. Their deployment in disaster relief operations poses challenges involving regulatory issues, particularly when decisions are made on an ad hoc basis by local and national authorities. Humanitarian relief agencies also warn of the risks of relief drones being mistaken for military aircraft.

Two arms good, four arms better

Japan and the US lead the world in the development of rescue and disaster relief robots. Teams from both countries collaborated in recovery efforts after an earthquake and tsunami hit Japan in March 2011, causing a meltdown at the Fukushima nuclear power plant. While a Japanese team operated an eight-meter long snake-like robot fitted with a camera, the US contribution included two remote-controlled robots. The first was a lightweight 22 kg model previously used for bomb disposal and other military tasks before being reconfigured for disaster relief operations. The larger US model, capable of lifting up to 90 kg, was adapted from a device originally used for firefighting and clearing rubble.

Endeavor Robotics 510 Packbots are deployed in emergency situations when direct human intervention is dangerous or impossible, such as after industrial or nuclear accidents. These robots were used at the Fukushima nuclear power plant (Photo: Endeavor Robotics)

In 2017, Japanese researchers unveiled a prototype drone-robot combination for use in disaster relief work. It consisted of a vision-guided robot equipped with sensitive measuring systems, including force sensors, and a drone tethered to the robot. Four fish‑eye cameras mounted on the drone capture video of overhead views, allowing the robot’s operator to assess damage in the surrounding area.

Another Japanese rescue robot unveiled in 2017 is a multi-limbed robot that is 1,7 m tall, with four arms capable of independent operation and four caterpillar treads for mobility. Called The Octopus, it is capable of lifting 200 kg with each arm, crossing uneven terrain and lifting itself over obstacles with two arms while the other two clear debris.

Octopus, developed by Japan’s Waseda University, is a robot designed to clear rubble in disaster areas, including nuclear facilities (Photo: Waseda University)

In the US, researchers are exploring ways in which a small and light collapsible, origami‑inspired robot, first produced by the National Aeronautics and Space Administration (NASA), could be adapted for use as a rescue robot. The device, known as PUFFER (Pop‑Up Flat Folding Explorer Robot) is designed to pack nearly flat for transport, and then re‑expand on site to explore tight nooks and crannies which are inaccessible to larger robots.

Over and underwater too

Given that 80% of the world’s population lives near water, maritime robotic vehicles can also play an important role in disaster relief by inspecting critical underwater infrastructure, mapping damage and identifying sources of pollution to harbours and fishing areas. Maritime robots helped to reopen ports and shipping channels in both Japan and Haiti after major earthquakes in 2011 and 2010 respectively.

In the Mediterranean, a battery‑powered robotic device first developed for use by lifeguards to rescue swimmers has been adapted to help rescue refugees crossing the Aegean Sea from Turkey. This maritime robot has a maximum cruising speed of 35 km/h and can function as a flotation device for 4 people.

Easy-to-find components and technology

Rescue robots use components and technology found in most other robots used for commercial purposes. Actuators and other electric motors, accelerometers, gyroscopes and dozens of sensors and cameras providing 360° views enable these robots to maintain balance while moving over uneven ground covered with rubble or debris, and to get a sense of the environment around them.

A robot operating in a hazardous environment needs independent power and sensors for specific environments. It may be cut off from its human operator when communication signals are patchy. When remote operation guided by sensor data becomes impossible, a rescue robot needs the ability to make decisions on its own, using machine learning or other artificial intelligence (AI) algorithms.

Several IEC TCs and SCs cooperate on the development of International Standards for the broad range of electrotechnical systems, equipment and applications used in rescue robots. In addition to IEC TC 47: Semiconductor devices and IEC SC 47F: Microelectromechanical systems, mentioned above, other IEC TCs involved in standardization work for specific areas affecting rescue and disaster relief robots include IEC TC 44: Safety of machinery – Electrotechnical aspects; IEC TC 2: Rotating machinery; IEC TC 17: Switchgear and controlgear; and IEC TC 22: Power electronic systems and equipment.

Where humans fear to tread

The number of disasters recorded globally has doubled since the 1980s, with damage and losses estimated at an average of USD 100 billion a year since the start of the new millennium, according to the Overseas Development Institute (ODI), a UK think tank. This trend is likely to lead to increased demand for unmanned robotic devices that can assist disaster relief operations on land, in the air and at sea.

Robots of all kinds play a growing role in supporting SAR teams. Increasing autonomy will create more capable ground robots, while a combination of rapid technological advances and regulation should see the market for disaster relief drones soar over the next five years.

MarketsandMarkets, a research company, estimated in October 2016 that the total global market for drones, comprising commercial and military sales, would grow at a compound annual growth rate (CAGR) of nearly 20% between 2016 and 2022 to exceed USD 21 billion. Drones designed for humanitarian and disaster relief operations will account for 10% of the future drone market, according to the US‑based Association of Unmanned Vehicle Systems International (AUVSI).

FIRST global competition off to a rousing start with all teams getting visas

After much uproar, media attention, and political pressure, President Trump intervened so that all the teams headed to Washington, DC for the FIRST Global Robotics Championship whose visas had been held up or denied could receive them. Some visas were granted as late as two days before the event. Although the Afghan team got all the press, the team from Gambia was also denied a visa when it first applied.

The three-day event, which started Sunday evening in Washington, DC with opening ceremonies, has teams from 157 countries (including the Afghanistan and Gambia teams), as well as some multinational teams representing continents. FIRST has organized competitions for many years, but this is the first year it is hosting an international competition.

FIRST Global founder Dean Kamen, the inventor who created the Segway, said: “The competition’s objective is not just to teach children to build robots and explore careers in science, technology, engineering and math; it drives home the lesson of the importance of cooperation — across languages, cultures and borders. FIRST Global is getting them [teams from around the world] at a young age to learn how to communicate with each other, cooperate with each other and recognize that we’re all going to succeed together or we’re all going down together.” 

Ivanka Trump met with the Afghan girls’ team and also put in an appearance at the competition. She tweeted: “It is a game everyone can play and where everyone can turn pro!”

The Washington Post described another set of problems that got resolved in an unusual way regarding the team from Iran:

Because of sanctions, FIRST Global was unable to ship a robotics kit to Iran, where a group of teenagers awaited the parts to build a robot. That might have spelled the end of the team’s shot of going to the world championships. But the organization introduced the Iranian team to a group of teenage robotics enthusiasts at George C. Marshall High School in Falls Church, Va., calling themselves Team Gryphon. The team in Iran sketched out blueprints on the computer and sent the designs to their counterparts across the ocean and then corresponded over Skype.

Sunday, the team flew the Iranian flag at their station next to the flag of Team Gryphon — a black flag with a purple silhouette of the gryphon — as a sign of their unlikely partnership.

For Mohammadreza Karami, the team’s mentor, it was an inspiring example of cooperation. “It’s possible to solve all of the world’s problems if we put aside our politics and focus on peace,” Karami said.

Kirsten Springer, a 16-year-old rising junior at Marshall High, said she didn’t want the Iranian team to be locked out of the competition just because of the sanctions. “Everybody should be able to compete … and to learn and to use that experience for other aspects of their life,” she said.

Now that all the teams are present and the competition has begun, we wish them all the best, hope they have a lot of fun, and particularly hope they meet and befriend many fellow robot enthusiasts. We thank them all for participating, and their team mentors and supporters for helping to make this happen.

Bringing neural networks to cellphones

Image: Jose-Luis Olivares/MIT

In recent years, the best-performing artificial-intelligence systems — in areas such as autonomous driving, speech recognition, computer vision, and automatic translation — have come courtesy of software systems known as neural networks.

But neural networks take up a lot of memory and consume a lot of power, so they usually run on servers in the cloud, which receive data from desktop or mobile devices and then send back their analyses.

Last year, MIT associate professor of electrical engineering and computer science Vivienne Sze and colleagues unveiled a new, energy-efficient computer chip optimized for neural networks, which could enable powerful artificial-intelligence systems to run locally on mobile devices.

Now, Sze and her colleagues have approached the same problem from the opposite direction, with a battery of techniques for designing more energy-efficient neural networks. First, they developed an analytic method that can determine how much power a neural network will consume when run on a particular type of hardware. Then they used the method to evaluate new techniques for paring down neural networks so that they’ll run more efficiently on handheld devices.

The researchers describe the work in a paper they’re presenting next week at the Computer Vision and Pattern Recognition Conference. In the paper, they report that the methods offered as much as a 73 percent reduction in power consumption over the standard implementation of neural networks, and as much as a 43 percent reduction over the best previous method for paring the networks down.

Energy evaluator

Loosely based on the anatomy of the brain, neural networks consist of thousands or even millions of simple but densely interconnected information-processing nodes, usually organized into layers. Different types of networks vary according to their number of layers, the number of connections between the nodes, and the number of nodes in each layer.

The connections between nodes have “weights” associated with them, which determine how much a given node’s output will contribute to the next node’s computation. During training, in which the network is presented with examples of the computation it’s learning to perform, those weights are continually readjusted, until the output of the network’s last layer consistently corresponds with the result of the computation.
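As a toy illustration of this weight readjustment (our sketch, not the authors' code), here is a single linear node learning the mapping y = 2x by gradient descent on squared error:

```python
# Minimal illustration of training: one linear "node" with a single
# weight, nudged after each example so its output moves toward the
# target, until the weight settles at the correct value.

def train(examples, lr=0.1, epochs=100):
    w = 0.0  # initial weight
    for _ in range(epochs):
        for x, y in examples:
            pred = w * x
            # Gradient of squared error (pred - y)**2 with respect to w.
            w -= lr * 2 * (pred - y) * x
    return w

examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train(examples)
print(round(w, 3))  # converges near 2.0
```

Real networks repeat exactly this per-weight update, via backpropagation, across millions of weights at once.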

“The first thing we did was develop an energy-modeling tool that accounts for data movement, transactions, and data flow,” Sze says. “If you give it a network architecture and the value of its weights, it will tell you how much energy this neural network will take. One of the questions that people had is ‘Is it more energy efficient to have a shallow network and more weights or a deeper network with fewer weights?’ This tool gives us better intuition as to where the energy is going, so that an algorithm designer could have a better understanding and use this as feedback. The second thing we did is that, now that we know where the energy is actually going, we started to use this model to drive our design of energy-efficient neural networks.”
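A toy version of such an energy model might look like this; the cost constants are invented for illustration (real numbers are hardware-specific), but they capture the key point that fetching weights from memory can cost far more than the arithmetic itself:

```python
# Toy energy model for a fully connected network. The constants are
# made up for illustration; on real hardware an off-chip memory access
# can cost orders of magnitude more energy than one multiply-accumulate.

E_MAC = 1.0        # energy per multiply-accumulate (arbitrary units)
E_WEIGHT = 200.0   # energy to fetch one weight from memory

def layer_energy(in_units, out_units):
    macs = in_units * out_units            # computation
    weight_fetches = in_units * out_units  # data movement
    return macs * E_MAC + weight_fetches * E_WEIGHT

def network_energy(layer_sizes):
    """Total energy for a fully connected net, e.g. [784, 300, 10]."""
    return sum(layer_energy(a, b) for a, b in zip(layer_sizes, layer_sizes[1:]))

shallow = network_energy([784, 1000, 10])    # fewer layers, more weights
deep = network_energy([784, 100, 100, 10])   # more layers, fewer weights
print(shallow > deep)
```

Under this model the deeper, narrower network wins, because it simply has fewer weights to move; the group's actual tool accounts for data reuse and dataflow as well, so the answer is not always this simple.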

In the past, Sze explains, researchers attempting to reduce neural networks’ power consumption used a technique called “pruning.” Low-weight connections between nodes contribute very little to a neural network’s final output, so many of them can be safely eliminated, or pruned.

Principled pruning

With the aid of their energy model, Sze and her colleagues — first author Tien-Ju Yang and Yu-Hsin Chen, both graduate students in electrical engineering and computer science — varied this approach. Although cutting even a large number of low-weight connections can have little effect on a neural net’s output, cutting all of them probably would, so pruning techniques must have some mechanism for deciding when to stop.

The MIT researchers thus begin pruning those layers of the network that consume the most energy. That way, the cuts translate to the greatest possible energy savings. They call this method “energy-aware pruning.”
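The ordering step can be sketched simply; the per-layer energy figures here are hypothetical stand-ins for the model's estimates:

```python
# Sketch of the "energy-aware" ordering: visit layers in descending
# order of estimated energy, so the earliest cuts yield the largest
# savings. The per-layer energy estimates are hypothetical.

def energy_aware_order(layer_energies):
    """Return layer indices sorted from most to least energy-hungry."""
    return sorted(range(len(layer_energies)),
                  key=lambda i: layer_energies[i], reverse=True)

layer_energies = [120.0, 340.0, 90.0, 210.0]  # from the energy model (assumed)
order = energy_aware_order(layer_energies)    # prune layer 1 first, then 3, ...
```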

Weights in a neural network can be either positive or negative, so the researchers’ method also looks for cases in which connections with weights of opposite sign tend to cancel each other out. The inputs to a given node are the outputs of nodes in the layer below, multiplied by the weights of their connections. So the researchers’ method looks not only at the weights but also at the way the associated nodes handle training data. Only if groups of connections with positive and negative weights consistently offset each other can they be safely cut. This leads to more efficient networks with fewer connections than earlier pruning methods did.
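A toy version of that cancellation test (tolerance and data invented for illustration): a group of connections is a candidate for removal only if its weighted contributions sum to nearly zero on every training input, not just on average.

```python
# Sketch of the cancellation idea: a group of incoming connections with
# positive and negative weights can be safely cut only if their
# contributions consistently offset each other across the training data.

def consistently_cancels(weights, training_inputs, tol=0.05):
    """True if the group's weighted sum is near zero for every input."""
    for inputs in training_inputs:
        total = sum(w * x for w, x in zip(weights, inputs))
        if abs(total) > tol:
            return False
    return True

# Weights +0.5 and -0.5 applied to inputs that always match: the
# contributions offset on every example, so the group can be cut.
group = [0.5, -0.5]
data = [(0.9, 0.9), (0.2, 0.2), (1.3, 1.3)]
safe_to_cut = consistently_cancels(group, data)
```

Checking every input, rather than the weights alone, is what distinguishes this from plain magnitude pruning.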

“Recently, much activity in the deep-learning community has been directed toward development of efficient neural-network architectures for computationally constrained platforms,” says Hartwig Adam, the team lead for mobile vision at Google. “However, most of this research is focused on either reducing model size or computation, while for smartphones and many other devices energy consumption is of utmost importance because of battery usage and heat restrictions. This work is taking an innovative approach to CNN [convolutional neural net] architecture optimization that is directly guided by minimization of power consumption using a sophisticated new energy estimation tool, and it demonstrates large performance gains over computation-focused methods. I hope other researchers in the field will follow suit and adopt this general methodology to neural-network-model architecture design.”

The Drone Center’s Weekly Roundup: 7/17/17

The South Korean Navy conducted shipboard flight tests for the TR-60 tilt-rotor UAV. Source: KARI

July 10, 2017 – July 16, 2017

If you would like to receive the Weekly Roundup in your inbox, please subscribe at the bottom of the page.

News

A U.S. drone strike in Afghanistan killed Abu Sayed, the leader of the local ISIS cell. In a statement, a Pentagon spokesperson said that the strike targeted the ISIS headquarters in Kunar Province. Abu Sayed’s predecessor, Abdul Hasib, was killed in a special forces raid in April. (New York Times)

A Turkish drone strike in Anatolia reportedly killed five suspected members of the Kurdistan Workers’ Party. It was the first strike carried out with the Anka-S, a new Turkish surveillance and strike drone. (IHS Jane’s Defense Weekly)

Portugal will implement a new law regulating drone operations after a recent spike in reports of close encounters between drones and manned aircraft. Drone users will be required to register and purchase insurance. Speaking with members of a parliamentary committee, Infrastructure Minister Pedro Marques said that he hopes the law will be in place by the end of the month. (Associated Press)

Commentary, Analysis, and Art

At The Digital Circuit, Scott Simmie looks at how hackers are selling kits to disable the geofencing in drones made by DJI.

At the Hill, Paul Scharre argues that the U.S. should start treating drone exports the same way it treats exports of manned aircraft.

At Popular Mechanics, David Hambling looks at the technology underpinning a U.S. drone designed for research missions in the Arctic.

At War on the Rocks, Ben Brewster argues that the U.S. Marine Corps needs a long-endurance surveillance and strike drone like the Reaper or Gray Eagle.

In a speech in Washington, Gen. Mike Holmes argued that the U.S. Air Force needs greater authority to protect its facilities and aircraft against drones. (FlightGlobal)

London’s Gatwick Airport released a video showing how a drone can interrupt air traffic around an airport. (Motherboard)

At Truthout, Alex Edney-Browne looks at how on-the-ground research can shed more light on the effects of U.S. drone strikes.

At Vertical Magazine, Oliver Johnson looks at how a Canadian company is using drones to help fight wildfires.

At CBC, Dean Beeby writes that Transport Canada’s Arctic drone program was delayed because of international arms control regulations.

At Popular Science, Kelsey D. Atherton looks at how Amazon’s drone hub concepts pose big challenges for urban planners.

A drone video captured the extent of the destruction of Mosul, Iraq. (Associated Press)

Know Your Drone

A team of universities, research institutes, and broadcasters is looking to develop swarms of three to five video drones that can be used to film large sporting events. (Horizon)

Serbia’s Military Technical Institute has unveiled a prototype unmanned ground vehicle that is armed with a machine gun and a grenade launcher. (IHS Jane’s 360)

The Korea Aerospace Research Institute conducted flight trials of its TR-60 tiltrotor drone from a moving ship. (IHS Jane’s 360)

NASA has developed onboard software for drones called Safeguard that forces them to land if they come too close to a no-fly zone. (Wired)

Walmart has announced that it will conduct tests for its delivery drone program at an airport in Upstate New York. (New York Upstate)

Drone maker AirDog unveiled the ADII, a multirotor consumer drone optimized to record sports events and other outdoor activities. (The Drive)

In a test, a team at Johns Hopkins University successfully transported a medical sample under temperature control over 160 miles by drone. (SUAS News)

A consortium of three European countries is looking to develop an unmanned ship for use in the oil and gas sector. (Construction.Ru)

Israeli company Flytrex is seeking to use sophisticated air traffic management software to enable drone deliveries. (The Drive)

A Florida-based company has developed a simulated IED-equipped drone for training soldiers. (C4ISRNET)

CASC conducted a flight test of the new CH-5 Rainbow drone at an airport in northeast China. (East Pendulum)

Drones at Work

Drone maker Flyability announced that its collision-resistant Elios drone has been used to inspect the interior of a nuclear reactor building. (Press Release)

The Australian Transport Safety Bureau is investigating a possible collision between a drone and a light aircraft near Adelaide. Nobody was injured in the incident. (ABC)

Meanwhile, the U.S. Air Force is looking to obtain authority to shoot down small drones over its facilities after a recent close encounter between an F-22 Raptor jet and a drone. (Aviation Week)

Researchers in Boulder, Colorado used drones to survey damage caused to local trees by invasive green jewel beetles. (Daily Camera)

South Korea is looking to deploy a new system to detect North Korean surveillance drones that fly into its territory. (UPI)

A Drone Racing League RacerX quadcopter drone set a new record for the fastest civilian drone, hitting a top speed of 179.3 mph. (CNET)

Researchers at Northwestern University are developing small ground robots equipped with whiskers that allow them to precisely detect their surroundings. (Wired)

Police in Devon, Cornwall, and Dorset in England are launching the country’s first police unit dedicated to using drones. (AP)

Singapore’s upcoming National Day Parade will feature a light show of 300 drones. (The Straits Times)

Police in DeSoto County, Mississippi used a drone to help investigators search for debris from a recent military plane crash in the area. (Fox13)

North Dakota authorities dismissed criminal charges against a man who was accused of using a drone to stalk a group of private security workers at the Dakota Access Pipeline protests last year. (Bismarck Tribune)

The British Army will no longer use the Black Hornet, a micro drone made by Norway’s Prox Dynamics. (IHS Jane’s Defense Weekly)

Industry Intel

The U.S. Office of Naval Research awarded Embry-Riddle University a $900,000 grant to develop advanced communication systems for unmanned surface vehicles. (Press Release)

The National Science Foundation awarded Bakman Technologies a grant to develop a sensor for a drone to monitor emissions that contribute to global warming. (Drone Life)

Colorado’s El Paso County awarded Sanborn Map Company a multi-year contract to provide drones for a variety of missions, including disaster response and construction site monitoring. (Press Release)

The U.S. Coast Guard awarded Riptide Autonomous Solutions a $69,374 contract for an unmanned underwater vehicle. (FBO)

The French Navy awarded ECA Group a contract for unmanned undersea vehicles for mine detection and disposal. (Press Release)  

Germany’s Bundeswehr awarded EMT a $71 million contract for three LUNA NG systems. (IHS Jane’s Defense Weekly)

Thales is reportedly marketing the Watchkeeper surveillance drone to the Indonesian Air Force. (IHS Jane’s Defense Weekly)

The U.S. Department of Defense awarded Autonomous Solutions a contract to apply machine learning and artificial intelligence to autonomous vehicles operating in challenging environments. (Press Release)

CybAero Ab, a Swedish company that makes rotary drones, signed a debt-refinancing deal with Bracknor, a Dubai-based investment firm. (DefenseNews)

For updates, news, and commentary, follow us on Twitter.


News and commentary from AUVSI/TRB Automated Vehicle Symposium 2017

I'm just back from the annual Automated Vehicle Symposium in San Francisco, co-hosted by the AUVSI (a commercial unmanned vehicle organization) and the Transportation Research Board, a government/academic research organization. It's an odd mix of business and research, but also the oldest self-driving car conference. I've been at every one, from the tiny first edition with perhaps 100-200 people to this one, with 1,400 attendees filling a large ballroom.

Toyota Research VC Fund

Tuesday morning did not offer too many surprises. The first was an announcement by the Toyota Research Institute of a $100M venture fund. Toyota committed $1B to this group a couple of years ago, but surprisingly Gil Pratt (who ran the DARPA Robotics Challenge for humanoid-like robots) has expressed somewhat mixed views, offering less optimistic forecasts.

What will be different about this VC fund is its use of DARPA-like "calls." The fund will declare, "Toyota would really like to see startups solving problem X"; startups will apply, and a couple will be funded. It will be interesting to see how that pans out.

Nissan’s control room is close to live

At CES, Nissan showed off their plan to have a remote control room to help robocars get out of sticky situations they can’t understand like unusual construction zones or police directing traffic. Here, they showed it as further along and suggested it will go into operation soon.

This idea has been around for a while (Nissan based it on some NASA research) and at Starship, it has always been our plan for our delivery robots. Others are building such centers as well. The key question is how often robocars need to use the human assistance, and how you make sure that unmanned vehicles stay in regions where they can get a data connection through which to get help. As long as interventions are rare, the cost is quite reasonable for a larger fleet.

This answers the question that Rod Brooks (of Rethink Robotics and iRobot) recently asked, pondering how robocars will handle his street in Cambridge, where strange things, like trucks blocking the road to make deliveries, are frequently found.

It’s a pretty good bet that almost all our urban spaces will have data connectivity in the 2020s. If any street doesn’t have solid data, and has frequent bizarre problems of any type, yet is really important for traversal by unmanned vehicles — an unlikely trifecta — it’s quite reasonable for vehicle operators to install local connectivity (with wifi, for example) on that street if they can’t wait for the mobile data companies to do it. Otherwise, don’t go down such streets in empty cars unless you are doing a pickup/drop-off on the street.

Switching Cities

Karl Iagnemma of nuTonomy told the story of moving their cars from Singapore, where driving is very regulated and done on the left, to Boston, where it is chaotic and done on the right.

The short summary is that the switch was harder than they expected. At the same time, I feel that if a small company like nuTonomy can do it, it is not a big burden globally. That’s important because it reflects on the question of whether we need one single set of regulations across the United States or Europe, or if it’s better to have a patchwork with jurisdictional competition allowing innovation in how vehicles are regulated.

How much testing do vehicles need?

Nidhi Kalra of RAND spoke about their research suggesting that testing robocars is an almost impossible task, because it would take hundreds of millions to a billion miles of driving to prove that a robocar is 10% better than human drivers.
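A back-of-the-envelope sketch of why the numbers get so large uses the statistician's "rule of three"; this is a simplification of the paper's actual analysis, and the human baseline rate below is approximate:

```python
# Rule of three: if you observe zero fatalities over n miles, the 95%
# upper confidence bound on the fatality rate is roughly 3/n. So to
# bound a robocar's rate below some target, you need about 3/target
# fatality-free miles. The human baseline rate is approximate.

HUMAN_FATALITY_RATE = 1.09 / 100_000_000   # fatalities per mile (approx.)

def miles_for_upper_bound(target_rate):
    """Miles of fatality-free driving needed so the 95% upper bound
    on the observed fatality rate falls below target_rate."""
    return 3.0 / target_rate

# To bound the rate 10% below the human baseline:
needed = miles_for_upper_bound(0.9 * HUMAN_FATALITY_RATE)
# on the order of hundreds of millions of fatality-free miles
```

Distinguishing a 10% improvement statistically (rather than merely bounding the rate) requires even more driving, which is how the paper reaches figures in the billions.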

This paper was published last year but I didn’t comment on it. I will post a more detailed commentary on it (and the reaction to it) shortly.

Security

Jonathan Petit has previously presented interesting results at this meeting about his attacks on LIDARs. Tired of holding breakout workshops on security with nothing happening, he decided instead to simply challenge the audience to e-mail him and others about their security concerns and plans. With 1,400 attendees, he got 4 responses. This crowd, at least, is not taking security seriously. Of course, only some portion of the room are actual developers of robocars; most are researchers, academics and non-engineers. Still, the result is disappointing.

Accidents

The day #1 breakout on "what happens after an accident" was off the record, but here are a few general observations:

  • The police representative didn't think there would be major changes in police investigations. They don't seem to think the full 3-D recordings of the accidents in the cars will be easy to get their hands on, so they'll go about things the same way as before.
  • Trial lawyers argued about whether the standard of strict liability — pay if you caused the accident — rather than payment only when negligence is found, might become the norm.
  • Automakers are torn on this issue. On the one hand, who wants to pay if you weren't negligent? On the other hand, it is only with negligence findings that high liability can be found. The cost of discovering the reason for a robocar's error will be very high, with detailed code examination, and deposition of all programmers and expert witnesses. It may be simpler to pay every time than to have complex and costly lawsuits half the time, if you seriously reduce the number of accidents.

On the other hand, many states have liability caps on accidents which would preclude cases getting very expensive. If the max payout is $300,000 you aren’t going to spend a million trying to get it, and you have no reason to refuse a settlement near that cap number.

Infrastructure

The AVS has a lot of governmental people, and they’re all very keen to imagine their role, which they see as making the infrastructure “ready” for robocars. There was a whole long session on the topic, and many people who imagine there is a lot to do.

This is the wrong impression. Robocars are being designed to handle the infrastructure we already have, and only low-skill robocar makers are suggesting we need to make significant changes to the infrastructure to enable these vehicles.

For example, some automakers building very basic camera-based lane-keeping systems, which find the lane markers using various algorithms, have complained that the markers are of poor quality on many roads. But Google, who actually got cars on the road first, designed their system not to require any particular quality from the lane markers. In fact, Google's cars only need lane markers so that human drivers know where to drive, and to know where to put the lanes in their internal maps. (They do want lane markers if the road has had new construction and has changed from the maps.)

Google’s algorithms actually prefer badly painted lane markers, because they find their location by matching the texture of the road, which includes holes in lane markers, road repairs and many other factors. It’s not a human way of driving, of course, but it doesn’t expect the road to change to suit the car.
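To make the idea concrete, here is a drastically simplified, one-dimensional sketch of texture matching: slide the currently observed road-texture strip along a stored map strip and take the best-matching offset. Everything here is illustrative, not Google's actual method.

```python
# Localization by texture matching (toy 1-D version): the car's position
# is the offset where what it currently observes best matches the stored
# map signature, scored by least squared difference. Holes in paint and
# road repairs become features of the signature rather than problems.

def best_offset(map_strip, observed):
    """Offset into map_strip where observed matches best."""
    best, best_score = 0, float("inf")
    for off in range(len(map_strip) - len(observed) + 1):
        score = sum((map_strip[off + i] - observed[i]) ** 2
                    for i in range(len(observed)))
        if score < best_score:
            best, best_score = off, score
    return best

road_map = [0.1, 0.9, 0.2, 0.8, 0.3, 0.7, 0.1]   # stored texture signature
seen = [0.8, 0.3, 0.7]                           # what the car sees now
offset = best_offset(road_map, seen)             # position 3 in the map
```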

For almost any proposal I have seen for how we might make infrastructure “robocar ready” there is a far cheaper and faster-to-develop solution that involves having the cars get smarter. Infrastructure change is only needed if there is a compelling case for why a fix in software can’t be found, or a case for why it can’t be done in virtual infrastructure.

Indeed, almost all the activity of infrastructure maintainers should focus on maintaining the virtual infrastructure instead. They should work to make sure roads are never changed without logging the change in a database, that road signs are all logged in databases, and that new signs don't go into force until they are logged. Such logging isn't hard — it's as simple as a mobile app on the phones of the crews who install new signs or make other changes, combined with strong rules requiring use of the app — for example, severe financial penalties for not logging changes.
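A sketch of how minimal such a logging system could be (all field names and the API below are hypothetical):

```python
# Toy version of the virtual-infrastructure log: each crew change (new
# sign, construction) is recorded before it takes force, and vehicles
# query the log by road segment before traversing it. In practice this
# would be a shared database behind a mobile app, not an in-memory list.

import datetime

change_log = []

def log_change(segment_id, change_type, description):
    entry = {
        "segment": segment_id,
        "type": change_type,          # e.g. "new_sign", "construction"
        "description": description,
        "logged_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    change_log.append(entry)
    return entry

def changes_for(segment_id):
    """What a vehicle would fetch before traversing a segment."""
    return [e for e in change_log if e["segment"] == segment_id]

log_change("I-80-W-mi42", "new_sign", "speed limit lowered to 55")
pending = changes_for("I-80-W-mi42")
```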

I continue to advance the proposition that “you don’t change the roads to suit your cars, you improve your cars to deal with the roads we have.” At least for the near future.

Sharks

It’s no surprise my favourite session was one I spoke at, the Shark Tank. We saw 4 proposals on how robocars will change the world, and we sharks got to debate the issues around them with the audience. I didn’t just like the session because of my own participation as a shark. Unlike many sessions it also had a lot of audience involvement. The 4 propositions we tackled were:

  • Congestion will go away
  • Transportation agencies will shrink, and so will transit agencies
  • Trucking will be quickly revolutionized
  • Car ownership will end

Surprisingly, the proposal from the libertarian Reason magazine representative for the withering of these agencies got almost no opposition. While I have felt this is likely myself, I did not expect a room of others to agree. There was much more dispute around congestion. More were skeptical of my proposals that we might meter trips with smartphones to reduce congestion than I had hoped.

International

The event closed with a summary of various international efforts. This matched my impression from the recent conference in Germany — in most of the rest of the world, government involvement is quite high, but also highly non-productive. The budgets of many of the EU- and Japanese-funded projects, for example, far exceeded the budget of Google's early efforts, yet Google produced an impressive car while the EU projects produced only minor results.

Particularly popular all over the world these days is the forming of consortia and alliances which sound impressive but accomplish very little. When I see the announcement of a new alliance or partnership that does not actually say what concrete thing the alliance will do, I can't escape the feeling that it's mostly for show, and not for real work.

Going forward

This conference began as the only self-driving conference and has grown. The problem with that growth is that most of the audience is new to the conference and the field. This pushes the sessions to be "dumbed down" with too much introductory background. While I am more informed than the average attendee, and will never get the perfect conference for me, I would like to see the sessions focus more on truly new things, things that are surprising. Companies who present have been told not to do marketing pitches or old news; the same has to apply to academics. This is challenging because academics spend a lot of time doing rigorous verification of things that are obvious. That's a worthwhile task, but not right for the main stage of a joint industry/academic event. That's why I liked the shark tank — it had a focus on issues about which informed people disagreed. That guarantees that much of the audience will be surprised by what they learn.

Coming up this week: Testing durations, and a new satire on the NHTSA levels and other regulation.

Hand-wringing hides the fact that Mexico is employing more, and fewer are coming to work in the U.S.

The Association for Advancing Automation (A3) cites that between 2010 and 2016, 136,748 robots were shipped to the US — the most in any seven-year period in the US robotics industry. At the same time, US manufacturing employment increased by 894,000 and the unemployment rate fell from 9.8% to 4.7%.

Yet manufacturers, robotics associations, ethicists and media pundits are still fighting the robotics and jobs issue. Brett Brune, Editor in Chief of Smart Manufacturing magazine, argues that “the hand-wringing around robotics and jobs in the US really needs to stop.” 

Manufacturers around the world, including in China, are busy figuring out how quickly to acquire robots. In Mexico, automation is thriving. So much so that the country is now the sixth-biggest auto producer globally. At the same time, the number of manufacturing workers is starting to swell.

A record-high 5.15 million Mexicans worked in manufacturing as of May, nearly a quarter of all workers registered with the country’s social security institute. Around 202,000 Mexicans joined the ranks of manufacturing workers during the first five months of this year alone.

California, which abuts Mexico, has America's largest Hispanic population (14.4 million in 2011) and has had a continuous supply of migrant farm workers since before statehood. It has been an agricultural mainstay in the US for close to 100 years and currently produces about 60% of the nation's fresh produce. But as the state's minimum wage approaches $15 and competition from the growing Mexican manufacturing economy mounts, farm managers are having to cope with a workforce that has shrunk and immigration policies that threaten to depress the labor supply further.

Since the late 1990s, the number of agricultural workers who move around the US working seasonal farming jobs has fallen by 60%. According to a study by the Institute for Research on Labor and Employment (UC Berkeley), half of that decline appears to be due to changes in the demographic makeup of the workforce, while government and institutional changes in the market account for the remaining half.

“This reduction in the number of migrant farmworkers increases the risk that fruits and vegetables will not be harvested before they spoil. To avoid this problem, farmers will switch crops, automate planting and harvesting, or take other actions to reduce the need for seasonal agricultural workers. Only a major change in our immigration and guest worker policies is likely to increase migration within the country and postpone automation.”

Interestingly, documented workers, though their numbers are steadily declining, still outnumber undocumented workers, as governmental and economic changes in the US and Mexico make immigration less attractive and demographic changes make farm workers less willing to migrate.

The case for robotics in the ag industry has never been stronger, as agricultural migration rates within the United States plummet. Recent research reports suggest that growth in ag technology will be exponential, but not necessarily in across-the-board robotics; rather, it will first attack three areas: (1) converting farms into connected digital systems through the use of precision ag methods; (2) gathering and accumulating data with sensor-laden ground-based and airborne drones, planes and satellites, then analyzing that data to produce actionable prescriptions; and (3) providing equipment that can vary its procedures (seeding, thinning, weeding, spraying, etc.) based on those prescriptions, and automating post-harvest processing (sorting, inspecting, handling, packaging, boxing, etc.).


Swarms of smart drones to revolutionise how we watch sports

Researchers are looking for ways to connect drones together in swarms to capture sports events. Image credit — Flickr/ Ville Hyvönen

by Joe Dodgshun

Anyone who has watched coverage of a festival or sports event in the last few years will probably have witnessed commercial drone use — in the form of breathtaking aerial footage. But a collaboration of universities, research institutes and broadcasters is looking to take this to the next level by using a small swarm of intelligent drones.

The EU-funded MULTIDRONE project seeks to create teams of three to five semi-automated drones that can react to and capture unfolding action at large-scale sports events. Project coordinator Professor Ioannis Pitas, of the University of Bristol, UK, says the collaboration aims to have prototypes ready for testing by its media partners Deutsche Welle and Rai – Radiotelevisione Italiana within 18 months.

‘Deutsche Welle has two potential uses lined up – filming the Rund um Wannsee boat race in Berlin, Germany, and also filming football matches with drones instead of normal cameras – while Rai is interested in covering cycling races,’ said Prof. Pitas.

‘We think we have the potential to offer a much better film experience at a reduced cost compared to helicopters or single drones, producing a new genre in drone cinematography.’

But before they can chase the leader of the Tour de France, MULTIDRONE faces the hefty challenge of creating AI that allows its drones to safely carry out a mission as a team. Prof. Pitas says safety is the utmost priority, so the drones will include advanced crowd avoidance mechanisms and the ability to make emergency landings.

And it’s not just safety in the case of bad weather, a flat battery or a rogue football. ‘Security of communications is important as a drone could otherwise be hijacked, not just undermining privacy but also raising the possibility that it could be used as a weapon,’ said Prof. Pitas.

The early project phase will have a strong focus on ethics to prevent any issues around privacy.

‘People are sensitive about drones and about being filmed and we’re approaching this in three ways — trying to avoid shooting over private spaces, getting consent from the athletes being followed, and creating mechanisms that decide which persons to follow and blur other faces.’

If they can pull it off, he predicts a huge boost for the European entertainment industry and believes it could lead to much larger drone swarms capable of covering city-wide events.

Drones-on-demand

According to Gartner research, sales of commercial-use drones are set to jump from 110 000 units in 2016 to 174 000 this year. Although 2 million toy drones were snapped up last year for USD 1.7 billion, the commercial market dwarfed this at USD 2.8 billion.

Aside from pure footage, drones have also proven their worth in research, disaster response, construction and even in monitoring industrial assets. One company trying to open up the market to those needing a sky-high helping hand is Integra Aerial Services (INAS), a young drones-as-a-service company. An offshoot of Danish aeronautics firm Integra Holding Group, INAS was launched in 2014 thanks to an EU-backed feasibility study.

INAS has more than 25 years of experience in aviation and used its knowledge of the sector’s legislation to shape a business model targeting heavier, more versatile drones weighing up to 25 kilogrammes. And they have already been granted a commercial drone operating license by the Danish Civil Aviation Authority.

These bigger drones have far more endurance than typical toy drones, which can weigh anywhere from 250 grams to several kilos. INAS CEO Gilles Fartek says their bigger size means they can carry multiple sensors, thus collecting all the needed data in one fell swoop, instead of across multiple flights. For example, one of their drones flies a LIDAR (Light Detection and Ranging) sensor over Greenland to measure ice thickness as a measure of climate change, but could also carry a 100-megapixel, high-definition camera.

While INAS spends most of the Arctic summer running experiments from the remote host Station Nord in Greenland, Fartek says they’re free to use the drones for different projects in other seasons, mostly in areas of environmental research, mapping and agricultural monitoring.

‘You can’t match the quality of data for the price, but drone-use regulations in Europe are still quite complicated and make between-country operations almost impossible,’ said Fartek. ‘The paradox is that you have an increasing demand for such civil applications across Europe and even in institutional areas like civil protection and maritime safety where they cannot use military drones.’

A single European sky

These issues, and more, should soon be addressed by SESAR, the project which coordinates all EU research and development activities in air traffic management. SESAR plans to deploy a harmonised approach to European airspace management by 2030 in order to meet a predicted leap in air traffic.

Recently SESAR unveiled its blueprint outlining how it plans to make drone use in low-level airspace safe, secure and environmentally friendly. They hope this plan will be ready by 2019, paving the way for an EU drone services market by safely integrating highly automated or autonomous drones into low-level airspace of up to 150 metres.

Modelled after manned aviation traffic management, the plan will include registration of drones and operators, provide information for autonomous drone flights and introduce geo-fencing to limit areas where drones can fly.
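As a toy illustration of the geo-fencing and altitude limits described (the 150-metre ceiling comes from the blueprint; the circular zone model and all coordinates are invented, since real systems use polygons and authoritative airspace data):

```python
# Two geo-fencing checks for low-level airspace: an altitude ceiling
# and a horizontal no-fly zone, here modelled as a simple circle in
# degree coordinates. Purely illustrative.

import math

ALTITUDE_CEILING_M = 150.0   # low-level airspace limit from the blueprint

def flight_allowed(lat, lon, altitude_m, no_fly_zones):
    """no_fly_zones: list of (lat, lon, radius_deg) circles (toy model)."""
    if altitude_m > ALTITUDE_CEILING_M:
        return False
    for zlat, zlon, radius in no_fly_zones:
        if math.hypot(lat - zlat, lon - zlon) < radius:
            return False
    return True

zones = [(52.31, 4.76, 0.05)]                        # hypothetical airport zone
ok = flight_allowed(52.40, 4.90, 100.0, zones)       # outside zone, under ceiling
blocked = flight_allowed(52.31, 4.76, 100.0, zones)  # inside the zone
```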


Asimov’s Laws won’t stop robots harming humans so we’ve developed a better solution

By Christoph Salge, Marie Curie Global Fellow, University of Hertfordshire

How do you stop a robot from hurting people? Many existing robots, such as those assembling cars in factories, shut down immediately when a human comes near. But this quick fix wouldn’t work for something like a self-driving car that might have to move to avoid a collision, or a care robot that might need to catch an old person if they fall. With robots set to become our servants, companions and co-workers, we need to deal with the increasingly complex situations this will create and the ethical and safety questions this will raise.

Science fiction already envisioned this problem and has suggested various potential solutions. The most famous was author Isaac Asimov’s Three Laws of Robotics, which are designed to prevent robots harming humans. But since 2005, my colleagues and I at the University of Hertfordshire have been working on an idea that could be an alternative.

Instead of laws to restrict robot behaviour, we think robots should be empowered to maximise the possible ways they can act so they can pick the best solution for any given scenario. As we describe in a new paper in Frontiers, this principle could form the basis of a new set of universal guidelines for robots to keep humans as safe as possible.

The Three Laws

Asimov’s Three Laws are as follows:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

While these laws sound plausible, numerous arguments have demonstrated why they are inadequate. Asimov’s own stories are arguably a deconstruction of the laws, showing how they repeatedly fail in different situations. Most attempts to draft new guidelines follow a similar principle to create safe, compliant and robust robots.

One problem with any explicitly formulated robot guidelines is the need to translate them into a format that robots can work with. Understanding the full range of human language and the experience it represents is a very hard job for a robot. Broad behavioural goals, such as preventing harm to humans or protecting a robot’s existence, can mean different things in different contexts. Sticking to the rules might end up leaving a robot helpless to act as its creators might hope.

Our alternative concept, empowerment, stands for the opposite of helplessness. Being empowered means having the ability to affect a situation and being aware that you can. We have been developing ways to translate this social concept into a quantifiable and operational technical language. This would endow robots with the drive to keep their options open and act in a way that increases their influence on the world.

When we tried simulating how robots would use the empowerment principle in various scenarios, we found they would often act in surprisingly “natural” ways. It typically only requires them to model how the real world works but doesn’t need any specialised artificial intelligence programming designed to deal with the particular scenario.
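Empowerment is defined information-theoretically, as the channel capacity from an agent's action sequences to its resulting sensor states. As a rough illustration only (this is not the authors' code, and the grid world is invented): in a fully deterministic world, n-step empowerment reduces to the log of the number of distinct states the agent can reach within n steps.

```python
from itertools import product
from math import log2

# Invented toy grid: '#' walls, '.' free cells. The agent occupies (row, col).
GRID = ["#####",
        "#...#",
        "#.###",
        "#.###",
        "#####"]
ACTIONS = {"stay": (0, 0), "up": (-1, 0), "down": (1, 0),
           "left": (0, -1), "right": (0, 1)}

def step(pos, action):
    """Deterministic transition: move unless a wall blocks the way."""
    r, c = pos
    dr, dc = ACTIONS[action]
    nr, nc = r + dr, c + dc
    return (nr, nc) if GRID[nr][nc] == "." else (r, c)

def empowerment(pos, horizon):
    """n-step empowerment. With deterministic dynamics, the channel capacity
    from action sequences to end states is log2(#distinct reachable states)."""
    reachable = set()
    for seq in product(ACTIONS, repeat=horizon):
        p = pos
        for a in seq:
            p = step(p, a)
        reachable.add(p)
    return log2(len(reachable))

print(empowerment((1, 2), 2))  # open corridor cell: 4 reachable states -> 2.0
print(empowerment((3, 1), 2))  # dead-end cell: 3 reachable states -> ~1.585
```

The dead-end cell scores lower than the open one, which is why an empowerment-maximising agent tends to keep its options open and avoid getting trapped, with no scenario-specific programming.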

But to keep people safe, the robots need to try to maintain or improve human empowerment as well as their own. This essentially means being protective and supportive. Opening a locked door for someone would increase their empowerment. Restraining them would result in a short-term loss of empowerment. And significantly hurting them could remove their empowerment altogether. At the same time, the robot has to try to maintain its own empowerment, for example by ensuring it has enough power to operate and it does not get stuck or damaged.
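A toy decision rule can make this concrete. The scenario below is entirely hypothetical (the door-and-battery world and the option counts are my inventions, not the paper's model): the robot scores each of its actions by the joint empowerment of the state it leads to, and picks the best.

```python
from math import log2

# Hypothetical world: a human in a room whose door the robot controls.
# Empowerment here is log2 of the number of one-step options (deterministic case).
def human_options(door_open):
    # Door open: the human can stay, move inside, or leave -> 3 options.
    # Door closed: leaving is impossible -> 2 options.
    return 3 if door_open else 2

def robot_options(battery_ok):
    # A charged robot can stay, toggle the door, or drive off; a dead one can't.
    return 3 if battery_ok else 1

def joint_empowerment(door_open, battery_ok):
    return log2(human_options(door_open)) + log2(robot_options(battery_ok))

def successor(state, action):
    """State the world ends up in after the robot's action."""
    door_open, battery_ok = state
    if action == "open_door":
        return (True, battery_ok)
    if action == "close_door":
        return (False, battery_ok)
    if action == "drain_battery":   # e.g. skipping a recharge
        return (door_open, False)
    return state                    # "noop"

state = (False, True)  # door closed, battery fine
actions = ["noop", "open_door", "close_door", "drain_battery"]
best = max(actions, key=lambda a: joint_empowerment(*successor(state, a)))
print(best)  # open_door: raises human empowerment, preserves the robot's
```

Opening the door wins because it increases the human's options; draining the battery loses badly because it zeroes the robot's own empowerment, matching the intuition in the text.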

Robots could adapt to new situations

Using this general principle rather than predefined rules of behaviour would allow the robot to take account of the context and evaluate scenarios no one has previously envisaged. For example, instead of always following the rule “don’t push humans”, a robot would generally avoid pushing them but still be able to push them out of the way of a falling object. The human might still be harmed but less so than if the robot didn’t push them.

In the film I, Robot, based on several Asimov stories, robots create an oppressive state that is supposed to minimise the overall harm to humans by keeping them confined and “protected”. But our principle would avoid such a scenario because it would mean a loss of human empowerment.

While empowerment provides a new way of thinking about safe robot behaviour, we still have much work to do on scaling up its efficiency so it can easily be deployed on any robot and translate to good and safe behaviour in all respects. This poses a very difficult challenge. But we firmly believe empowerment can lead us towards a practical solution to the ongoing and highly debated problem of how to rein in robots’ behaviour, and how to keep robots, in the most naive sense, “ethical”.

This article was originally published on The Conversation. Read the original article.

Singapore: An autonomous innovation center

Jim Robinson of RRE Ventures said it best last month at the Silicon Dragon Conference when comparing Silicon Valley to New York: “There are two kinds of centers that have a lot of startups and technology: there are technology centers and commerce centers.” New York falls into the latter category, while the Valley is the former. Sitting next to Jim, I reflected that Singapore might be in both groups, an Asian commerce hub and a leader in mechatronics. As an advocate for automation, I am often disheartened that the United States significantly lags behind its industrial counterparts in manufacturing autonomous machines. The key to a pro-job policy could be learning from the successes of countries like Singapore to implement America’s own ‘Robot First Plan.’

Last August, Singapore became the first country to permit autonomous taxis on its roads. Boston-based startup nuTonomy moved its operations to the Far East, which enabled the company to launch public trials weeks before Uber’s test in Pittsburgh. The secret to the company’s speed to market was to skip America’s 19,492 municipal government licensing departments and pilot its technology years before any of the competing technologies, with the exception of Google’s Waymo. In addition to fewer regulatory hoops, the Singapore Land Transport Authority has partnered with nuTonomy on its rollout.

Pang Kin Keong, Singapore’s Permanent Secretary for Transport and the Chairman of its Committee on autonomous driving, said “We face constraints in land and manpower. We want to take advantage of self-driving technology to overcome such constraints, and in particular to introduce new mobility concepts which could bring about transformational improvements to public transport in Singapore.”

The company is on track to offer its driverless taxis throughout the country by 2018. Doug Parker, nuTonomy’s COO, estimates that autonomous taxis could ultimately reduce the number of cars on Singapore’s roads from 900,000 to 300,000. Parker explains, “When you are able to take that many cars off the road, it creates a lot of possibilities. You can create smaller roads, you can create much smaller car parks. I think it will change how people interact with the city going forward.” The partnership with the city-state is made possible because Singapore is not saddled with the costs of aging infrastructure like many US and European cities.

Since first announcing its test in 2016, nuTonomy is on pace to expand globally with its recent partnership with ride-sharing company Lyft on a pilot in Boston. Karl Iagnemma, nuTonomy’s CEO, said: “By combining forces with Lyft in the U.S., we’ll be positioned to build the best passenger experience for self-driving cars. Both companies care immensely about solving urban transportation issues and the future of our cities, and we look forward to working with Lyft as we continue to improve our autonomous vehicle software system.”

Besides autonomous vehicles, drones have been widely embraced by Singapore’s famously strict police department. Singaporean startup H3 Dynamics became the first company last year to launch a drone-in-a-box solution that offers storage and charging stations in the field. H3’s “DRONEBOX” is a unique solar-based charging station that enables longer autonomous missions in areas that are typically hostile to humans. Since showcasing its technology above the streets of Singapore, H3 faces increased competition from formidable upstarts, including Airobotics, Easy Aerial, and HiveUAV.

According to its original press release, “DRONEBOX is an all-inclusive, self-powered system that can be deployed anywhere, including in remote areas where industrial assets, borders, or sensitive installations require constant monitoring. Designed as an evolution over today’s many unattended sensors and CCTV cameras installed in cities, borders, or large industrial estates, DRONEBOX innovates by giving sensors freedom of movement using drones as their vehicles. End-users can now deploy flying sensor systems at different locations, and measure just about anything, anywhere, anytime. They offer 24/7 reactivity, providing critical information to operators – even to those located thousands of miles away.”


In June, H3 Dynamics announced the opening of new drone operation centers in Europe, America and the Middle East. Additionally, H3 is now marketing its next generation of battery technology for extended high-value asset missions. H3’s HES Energy Systems is the product of a decade-long research initiative between the company and the Singaporean government. Unlike typical drone lithium batteries, which have a flight time of 20–40 minutes, HES Energy developed a ground-breaking six-hour battery with a first-of-its-kind “solid-hydrogen on demand” power system. The combination of longer flights, self-charging stations and autonomous missions is a powerful value proposition that differentiates this Singaporean offering in an already crowded unmanned flight market.

This past week, Dubai announced its plans to roll out a fleet of mini autonomous police cars for surveillance and crime prevention. The effort is part of the Middle Eastern city’s program to automate 25% of its police force over the next decade. The Gulf state’s ambitious plans were a perfect fit for Singaporean OTSAW Digital, a division of ActiV, a global tech powerhouse. Similar to nuTonomy and H3, OTSAW’s O-R3 grew out of the innovation-friendly environment of the Asian republic.

The O-R3 is smaller than a golf cart, and is not actually meant to capture nefarious actors but to identify suspicious activity as it happens. Using facial recognition technology and a built-in aerial drone, the robot will begin patrolling the Dubai beat by the end of the year. The autonomous car/drone combo is almost a hybrid of nuTonomy and H3, with an array of sensors and machine intelligence technologies to survey the area via thermal imaging, license plate readers and cloud-based computing.


According to Abdullah Khalifa Al Marri, the head of the Dubai police department, the O-R3 isn’t meant to replace officers but rather to augment their skills with more efficient resources. “We seek to augment operations with the help of technology such as robots. Essentially, we aim for streets to be safe and peaceful without heavy police patrol,” said Al Marri. Last month, Dubai even deployed a humanoid-looking robot, dubbed Robocop, to monitor tourist attractions; it speaks English and Arabic. Dubai also plans to deploy larger humanoids that stand over 10 feet tall and travel at over 50 mph, with a human controller operating the device from inside the robot.

Brigadier-General Khalid Nasser Al Razzouqi, Director-General of Smart Services at the Dubai Police, boasted: “The launch of the world’s first operational Robocop is a significant milestone for the Emirate and a step towards realizing Dubai’s vision to be a global leader in smart cities technology adoption.” In 2015, the World Economic Forum ranked the United Arab Emirates as the second most tech-savvy government in the world, just behind Singapore.


As I write this article, I find myself at another conference, “The State of New York: Smart Cities.” Hoping for insights about how my city will compete with such tech-savvy counterparts, I was instead met by a group of bureaucrats touting app competitions and free WiFi. One speaker even suggested that the Metropolitan Transportation Authority (MTA) is run by the best executive team, even though New York’s Governor has politely called the organization dysfunctional due to multiple train derailments, signal problems, and overcrowded stations (see: The Summer of Hell). Science is not just about the possible, but the willing. Singapore’s ability to reinvent the very nature of how a city operates and partners with the private sector is proof positive that even a country founded just 50 years ago can climb to the “top of the heap.”


Lakeside Research Days: Swarming in cyber physical systems

Photo by www.Lakeside-Labs.com

An interdisciplinary workshop on self-organization and swarm intelligence in cyber physical systems was held at Lakeside Labs this week. Experts presented their work and discussed open issues in this exciting field.

“Our crazyswarm is the largest indoor drone swarm that I’m aware of,” Nora Ayanian states. The assistant professor from the Viterbi School of Engineering at the University of Southern California (USC) in Los Angeles was recently named one of MIT Technology Review’s “35 Innovators Under 35.” She came to Klagenfurt to present her latest results on multi-robot coordination. She advocates taking the perspective of a robot when designing coordination algorithms. Her team thus programmed a multiplayer computer game in which people have to form a certain spatial pattern by moving around, but are constrained to a robot’s limited view and are not allowed to use any explicit communication. She expects the game to yield new insights for the development of a human-inspired approach to robot coordination.

The second keynote was given by Gianni Di Caro from Carnegie Mellon University. His talk highlighted the latest advances in wearable interfaces for multi-modal interaction between humans and robot swarms. His philosophy is that most computations, such as decoding and fusing vocal and gestural commands, are done not in the robots but in the wearable devices. Di Caro was also a guest in Klagenfurt in 2013. “I enjoyed the Lakeside Research Days so much, that’s why I came back,” he says.

Two additional highlights were the presentations by Johannes Gerstmayr, professor of machine elements and design at the University of Innsbruck, and Thomas Schmickl, professor of biology at the University of Graz. Gerstmayr introduced his adaptive tetrahedral elements, which can be put together to form reconfigurable robots and programmable matter. It is fascinating to imagine that these elements could one day enable reconfigurable furniture and art objects. Schmickl explained that entities in biological swarms might not need any explicit communication in order to collaborate. He also explained that the behavior of honeybees is quite diverse, and that only seven percent of honeybees are “goal finders” with the capability to systematically find a certain target, such as heat spots. According to Schmickl, the most promising application of swarm robotics is the search for extraterrestrial life forms. He is currently developing swarms of small underwater robots for this purpose, which will soon be tested in the Adriatic Sea.
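Schmickl's group is well known for the BEECLUST algorithm, which captures this kind of communication-free collaboration. The sketch below is a loose, simplified rendering of the idea, with all parameters invented: agents random-walk over a heat field, and when one bumps into another it simply stops for a time proportional to the local heat reading. No messages are exchanged, yet agents tend to accumulate near the warmest spot, because stopped agents in hot areas cause further collisions there.

```python
import random

random.seed(1)
WIDTH = 30
N = 20
# Invented 1-D heat field with its peak near cell 22.
heat = [1 + 9 * (1 - abs(x - 22) / WIDTH) for x in range(WIDTH)]

pos = [random.randrange(WIDTH) for _ in range(N)]  # agent positions
wait = [0] * N                                     # remaining stop time

for t in range(3000):
    for i in range(N):
        if wait[i] > 0:            # still waiting after a collision
            wait[i] -= 1
            continue
        # Random walk, clamped to the arena.
        pos[i] = max(0, min(WIDTH - 1, pos[i] + random.choice((-1, 1))))
        # "Collision": another agent occupies the same cell.
        if any(j != i and pos[j] == pos[i] for j in range(N)):
            wait[i] = int(heat[pos[i]])  # hotter cell -> longer stop

near_peak = sum(1 for p in pos if abs(p - 22) <= 5)
print(near_peak, "of", N, "agents ended within 5 cells of the heat peak")
```

The waiting rule is the entire mechanism: each agent reads only its own sensor and reacts to physical contact, which is what makes the approach attractive for minimal robots.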

About 30 people attended the 2017 Lakeside Research Days, held in collaboration with the University of Klagenfurt from July 10 to 12, 2017. The emphasis was on scientific interaction and group work. The participants discussed, for example, the differences between swarms and controlled systems, and concluded that swarms are especially useful in unknown and changing environments. The program also included laboratory sessions with training on micro-robots and talks from the Horizon 2020 project CPSwarm.

Keynotes were sponsored by the Lakeside Science & Technology Park GmbH, Infineon Technologies Austria AG, KELAG, and the TeWi-Förderverein. Melanie Schranz, senior researcher at Lakeside Labs and one of the organizers, is very satisfied with the outcome of the workshop: “I gained a lot of inspiration from great people for my own research,” she concludes.

Further impressions about the event can be found at Twitter Moments and #resdays17.


Robot Launch 2017: Two weeks left to enter the competition!

The Robotics Hub, in collaboration with Silicon Valley Robotics, is looking to invest up to $500,000 in robotics, AI and sensor startups! Finalists also receive exposure on Robohub and space in the new Silicon Valley Robotics Cowork Space. Plus you get to pitch your startup to an audience of top VCs, investors and experts. Entries close August 31.

In previous Robot Launch competitions, we’ve had hundreds of entries from more than 20 countries around the world. Our finalists have reached the finals of major startup competitions like TechCrunch Disrupt, gone on to raise millions of dollars in funding, and made strong industry partnerships, such as working with the Siemens Frontier Program.

Our semifinalists will also be featured on Robohub, which means they’ll reach an audience of approximately 100,000 viewers. Everyone who enters gets incredibly valuable feedback from top robotics VCs, investors and experts.

CRITERIA: Your startup should be under 5 years old, with less than $2 million in funding. You should have a great new robotics technology and business model, and your startup should be related to robotics, AI, simulation, sensors or autonomous vehicles. ENTER NOW.

Robot Launch is supported by Silicon Valley Robotics to help more robotics startups present their technology and business models to prominent investors. Silicon Valley Robotics is the not-for-profit industry group supporting innovation and commercialization in robotics technologies. The Robotics Hub is a first investor in advanced robotics and AI startups, helping them get from ‘zero to one’ with its network of robotics and market experts.

Please share this in your networks and let us know if you’d like to be a judge, mentor or can offer a prize for Robot Launch 2017. Just email Andra [andra @ robotlaunch.com].

Learn more about previous Robot Launch competitions here.
