Archive 31.08.2017


New soft robots really suck: Vacuum-powered systems empower diverse capabilities

Recent advances in soft robotics have seen the development of soft pneumatic actuators (SPAs) to ensure that all parts of the robot are soft, including the functional parts. These SPAs have traditionally used increased pressure in parts of the actuator to initiate movement, but today a team from NCCR Robotics and RRL, EPFL publish a new kind of SPA, one that uses vacuum, in Science Robotics.

The new vacuum-powered Soft Pneumatic Actuator (V-SPA) is soft, lightweight and very easy to fabricate. By using foam and coating it with layers of silicone rubber, the team have created an actuator that can be made from off-the-shelf parts without the need for molds – in fact, it takes just two hours to manufacture the V-SPA.

Once produced, the actuators were combined into plug-and-play “V-SPA Modules” that simplify the design of soft pneumatic robots with many degrees of freedom. In fact, the team created reconfigurable, modular robots from these modules, in which every function of the robot was powered by a single shared vacuum source, enabling many different capabilities, such as multiple forms of ground locomotion, vertical climbing, object manipulation and stiffness changing.

To test the new modular robot, the team added a suction arm which used the vacuum pump to pick up and move a series of objects, turning the suction on when an object should be carried and letting the arm refill with air when the object should be released. Further validation came from attaching suction cups to the robot and using it to climb a vertical window, and from making the robot walk with a number of different gaits (either wave-like, as in a snake, or rolling).

By creating a soft, lightweight actuator that can move in any direction the team hope to enable a new generation of truly soft, compliant robots that can interact safely with the humans that use them.

 

 

Reference

M. A. Robertson and J. Paik, “New soft robots really suck: Vacuum-powered systems empower diverse capabilities,” Science Robotics. doi: 10.1126/scirobotics.aan6357

Industrial robots in China up, up and away!

China has rapidly become a global leader in robotics and automation. Annual sales of industrial robots in 2016 reached the highest level ever recorded for any single country: 87,000 units (up 27% from 2015). China’s stock of industrial robots, now at 340,000 units, is also the highest total in the world, while Chinese robot manufacturers increased their market share to 31% (up 120% from 2015).

The International Federation of Robotics (IFR), which provided these figures, is forecasting that “from 2018 to 2020, a sales increase between 15 and 20 percent on average per year is possible for industrial robots.” And these projections don’t include service robots for professional and B2B use, and personal use such as toys, drones, mobile gofers, guides, home assistants, and consumer products like robotic vacuums and floor and window cleaners.

Outlook for 2017

According to a report released by China Robot Industry Alliance (CRIA) at the big World Robot Conference in August in Beijing and reported by China Daily, China’s industrial robot market is expected to reach $4.22 billion in 2017 representing more than 110,000 new industrial robots.

At the same press conference, CRIA also reported that China’s service robot market will reach $1.32 billion this year, up 28% from 2015.

Outlook to 2020

The main drivers for the growth of the use of industrial robots in China are the electrical and electronics industry followed by general handling, welding and the auto industry. This broad and expanding demand is expected to continue as major contract manufacturers start and/or continue to automate their production. A further driving factor is China’s growing consumer market for all kinds of consumer goods.

According to the ten-year national plan “Made in China 2025,” the Chinese government wants to transform China from a low-cost labor-intensive manufacturing giant into a technology-based world manufacturing power. The plan includes strengthening Chinese robot suppliers and further increasing their market shares in China and abroad.

Shanzhai

Shenzhen is the Silicon Valley of technology and hardware for China. Things get made FAST. All kinds of ‘things.’ The can-make attitude in Shenzhen is being duplicated around China, so it is important to know what goes on there, why it happens in Shenzhen, why it happens so fast, and what they think about patents, intellectual property and Western companies.

Another factor (driver) in China’s relentless push toward automation and robotics is this factoid: In 2016, China’s mobile payments hit $5.5 trillion, roughly 50 times the size of America’s $112 billion market, according to consulting firm iResearch. Chinese are adopting cashless and e-commerce methods at a rate significantly faster than the rest of the world.

WIRED Video produced an hour-long documentary describing the process, the people, and ‘Shanzhai,’ the evolving philosophy of copycat manufacturing, and attempts to put a positive spin on patent avoidance and what many Westerners call stealing, plus the speed of production for adequate profits (as opposed to massive profits). It is a worthwhile and very informative investment of an hour of your time.

New robot rolls with the rules of pedestrian conduct


by Jennifer Chu
Engineers at MIT have designed an autonomous robot with “socially aware navigation,” that can keep pace with foot traffic while observing these general codes of pedestrian conduct.
Credit: MIT

Just as drivers observe the rules of the road, most pedestrians follow certain social codes when navigating a hallway or a crowded thoroughfare: Keep to the right, pass on the left, maintain a respectable berth, and be ready to weave or change course to avoid oncoming obstacles while keeping up a steady walking pace.

Now engineers at MIT have designed an autonomous robot with “socially aware navigation” that can keep pace with foot traffic while observing these general codes of pedestrian conduct.

In drive tests performed inside MIT’s Stata Center, the robot, which resembles a knee-high kiosk on wheels, successfully avoided collisions while keeping up with the average flow of pedestrians. The researchers have detailed their robotic design in a paper that they will present at the IEEE Conference on Intelligent Robots and Systems in September.

“Socially aware navigation is a central capability for mobile robots operating in environments that require frequent interactions with pedestrians,” says Yu Fan “Steven” Chen, who led the work as a former MIT graduate student and is the lead author of the study. “For instance, small robots could operate on sidewalks for package and food delivery. Similarly, personal mobility devices could transport people in large, crowded spaces, such as shopping malls, airports, and hospitals.”

Chen’s co-authors are graduate student Michael Everett, former postdoc Miao Liu, and Jonathan How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics at MIT.

Social drive

In order for a robot to make its way autonomously through a heavily trafficked environment, it must solve four main challenges: localization (knowing where it is in the world), perception (recognizing its surroundings), motion planning (identifying the optimal path to a given destination), and control (physically executing its desired path).

Chen and his colleagues used standard approaches to solve the problems of localization and perception. For the latter, they outfitted the robot with off-the-shelf sensors, such as webcams, a depth sensor, and a high-resolution lidar sensor. For the problem of localization, they used open-source algorithms to map the robot’s environment and determine its position. To control the robot, they employed standard methods used to drive autonomous ground vehicles.

“The part of the field that we thought we needed to innovate on was motion planning,” Everett says. “Once you figure out where you are in the world, and know how to follow trajectories, which trajectories should you be following?”

That’s a tricky problem, particularly in pedestrian-heavy environments, where individual paths are often difficult to predict. As a solution, roboticists sometimes take a trajectory-based approach, in which they program a robot to compute an optimal path that accounts for everyone’s desired trajectories. These trajectories must be inferred from sensor data, because people don’t explicitly tell the robot where they are trying to go. 

“But this takes forever to compute. Your robot is just going to be parked, figuring out what to do next, and meanwhile the person’s already moved way past it before it decides ‘I should probably go to the right,’” Everett says. “So that approach is not very realistic, especially if you want to drive faster.”

Others have used faster, “reactive-based” approaches, in which a robot is programmed with a simple model, using geometry or physics, to quickly compute a path that avoids collisions.

The problem with reactive-based approaches, Everett says, is the unpredictability of human nature — people rarely stick to a straight, geometric path, but rather weave and wander, veering off to greet a friend or grab a coffee. In such an unpredictable environment, such robots tend to collide with people or look like they are being pushed around by avoiding people excessively.

 “The knock on robots in real situations is that they might be too cautious or aggressive,” Everett says. “People don’t find them to fit into the socially accepted rules, like giving people enough space or driving at acceptable speeds, and they get more in the way than they help.”

Training days

The team found a way around such limitations, enabling the robot to adapt to unpredictable pedestrian behavior while continuously moving with the flow and following typical social codes of pedestrian conduct.

They used reinforcement learning, a type of machine learning approach, in which they performed computer simulations to train a robot to take certain paths, given the speed and trajectory of other objects in the environment. The team also incorporated social norms into this offline training phase, in which they encouraged the robot in simulations to pass on the right, and penalized the robot when it passed on the left.
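
A minimal sketch of what such a norm-shaped reward could look like in the training simulations (the weights, field names and values below are illustrative assumptions, not the ones used in the MIT work):

    # Hypothetical reward shaping for socially aware navigation (illustrative only).
    def social_reward(state, collided, reached_goal):
        """Score one simulated step: progress bonuses plus penalties for norm violations."""
        reward = -0.01                       # small per-step cost keeps the agent moving
        if collided:
            return reward - 1.0              # hard penalty for hitting a pedestrian
        if reached_goal:
            return reward + 1.0              # bonus for arriving at the destination
        # Social-norm shaping: encourage passing on the right, penalize passing on the left.
        if state.passing_side == "right":
            reward += 0.05
        elif state.passing_side == "left":
            reward -= 0.05
        return reward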

“We want it to be traveling naturally among people and not be intrusive,” Everett says. “We want it to be following the same rules as everyone else.”

The advantage to reinforcement learning is that the researchers can perform these training scenarios, which take extensive time and computing power, offline. Once the robot is trained in simulation, the researchers can program it to carry out the optimal paths, identified in the simulations, when the robot recognizes a similar scenario in the real world.

The researchers enabled the robot to assess its environment and adjust its path every one-tenth of a second. In this way, the robot can continue rolling through a hallway at a typical walking speed of 1.2 meters per second, without pausing to reprogram its route.

“We’re not planning an entire path to the goal — it doesn’t make sense to do that anymore, especially if you’re assuming the world is changing,” Everett says. “We just look at what we see, choose a velocity, do that for a tenth of a second, then look at the world again, choose another velocity, and go again. This way, we think our robot looks more natural, and is anticipating what people are doing.”
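
That cycle can be pictured as a simple receding-horizon loop; the sketch below assumes hypothetical sense(), choose_velocity() and drive() helpers rather than the actual MIT code:

    import time

    DECISION_PERIOD = 0.1    # re-plan every tenth of a second, as described above

    def navigation_loop(sense, choose_velocity, drive, goal):
        """Repeatedly observe, pick a short-horizon velocity, and execute it."""
        while True:
            observation = sense()                          # current view of nearby pedestrians
            v, omega = choose_velocity(observation, goal)  # trained policy selects a velocity
            drive(v, omega)                                # execute it for one decision period
            time.sleep(DECISION_PERIOD)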

Crowd control

Everett and his colleagues test-drove the robot in the busy, winding halls of MIT’s Stata Building, where the robot was able to drive autonomously for 20 minutes at a time. It rolled smoothly with the pedestrian flow, generally keeping to the right of hallways, occasionally passing people on the left, and avoiding any collisions.

“We wanted to bring it somewhere where people were doing their everyday things, going to class, getting food, and we showed we were pretty robust to all that,” Everett says. “One time there was even a tour group, and it perfectly avoided them.”

Everett says going forward, he plans to explore how robots might handle crowds in a pedestrian environment.

“Crowds have a different dynamic than individual people, and you may have to learn something totally different if you see five people walking together,” Everett says. “There may be a social rule of, ‘Don’t move through people, don’t split people up, treat them as one mass.’ That’s something we’re looking at in the future.”

This research was funded by Ford Motor Company.  

The need for robotics standards

Last week I was talking to a lead engineer at a Singapore company that is building a benchmarking system for robot solutions. Having seen my presentation at ROSCon 2016 about robot benchmarking, he asked me how I would benchmark solutions that are not ROS-compatible. I said that I wouldn’t dedicate time to benchmarking solutions that are not ROS-based. Instead, I suggested using the time to polish the ROS-based benchmarking and encouraging vendors to adopt that middleware in their products.

Benchmarks are necessary and they need standards

Benchmarks are necessary to improve any field. By having a benchmark, different solutions to a single problem can be compared, and hence a direction for improvement can be traced. Currently, robotics lacks such a benchmarking system.

I strongly believe that to create a benchmark for robotics we need a standard at the level of programming.

By having a standard at the level of programming, manufacturers can build their own hardware solutions at will, as long as they are programmable with the programming standard. That is the approach taken by devices that can be plugged into a computer. Manufacturers create the product on their own terms, and then provide a Windows driver that allows any computer in the world (that runs Windows) to communicate with the product. Once this computer-to-product communication is made, you can create programs that compare the same type of devices from different manufacturers for performance, quality, noise, whatever your benchmark is trying to compare.

You see? Different types of devices, different types of hardware. But all of them can be compared through the same benchmarking system that relies on the Windows standard.
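
As a toy illustration, all a benchmark needs is for every vendor to implement one agreed interface; the interface and metrics below are invented for the example:

    from abc import ABC, abstractmethod
    import time

    class RangeSensor(ABC):
        """Hypothetical standard interface that every vendor driver would implement."""
        @abstractmethod
        def read_distance_m(self) -> float: ...

    def benchmark(sensor: RangeSensor, true_distance_m: float, samples: int = 100):
        """Compare any compliant sensor on the same error and throughput metrics."""
        errors, start = [], time.time()
        for _ in range(samples):
            errors.append(abs(sensor.read_distance_m() - true_distance_m))
        return {"mean_error_m": sum(errors) / samples,
                "reads_per_second": samples / (time.time() - start)}

Any device whose driver implements the agreed interface can be ranked by the same script, which is exactly the role a robotics middleware plays for whole robots.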

Software development for robots also needs standards

Standards are needed not only for comparing solutions but also to speed up robotics development. By having a robotics standard, developers can concentrate on building solutions that do not have to be re-implemented whenever the robot hardware changes. In fact, given the middleware structure, developers can disassociate themselves from the hardware to the point that they can spend almost 100% of their time in the software realm, while still developing code for robots.
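
To make that concrete, here is a minimal sketch of a ROS node (the middleware discussed below) that publishes standard velocity commands on /cmd_vel; the same code drives any robot whose base controller subscribes to that topic, regardless of the hardware underneath:

    #!/usr/bin/env python
    import rospy
    from geometry_msgs.msg import Twist

    def main():
        rospy.init_node("drive_forward")
        pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)
        rate = rospy.Rate(10)              # publish commands at 10 Hz
        cmd = Twist()
        cmd.linear.x = 0.2                 # 0.2 m/s forward, in a hardware-agnostic message
        while not rospy.is_shutdown():
            pub.publish(cmd)
            rate.sleep()

    if __name__ == "__main__":
        main()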

We need the same type of standard for robotics. We need a kind of operating system that allows us to compare different robotics solutions. We need the Windows of the PCs, the Android of the phones, the CAN of the buses…


A few standard proposals and a winner

But you already know that. I’m not the first one to state this. Actually, many people have already tried to create such a standard. Some examples include Player, ROS, YARP, OROCOS, Urbi, MIRA or JdE Robot, to name a few.

Personally, I don’t care which standard is used. It could be ROS, it could be YARP, or it could be any other that has not been created yet. The only thing I really care about is that we adopt a standard as soon as possible.

And it looks like the developers have decided. Robotics developers prefer ROS as their common middleware to program robots.

No other middleware for robotics has had such a large adoption. Some data about it:

                                     ROS        YARP        OROCOS
Number of Google pages               243,000    37,000      42,000
Citations of the middleware paper    3,638      463         563
Alexa ranking                        14,118     1,505,000   668,293

Note 1: Only showing the current big three players.

Note 2: Very simple comparison. Difficult to compare in other terms since data is not available.

Note 3: Data measured in August 2017. May vary at the time you are reading this. Links provided on the numbers themselves, so you can check yourself.

This is not only the feeling that we, roboticists, have. The numbers also indicate that ROS is becoming the standard for robotics programming.


Why ROS?

The question, then, is why ROS has emerged on top of all the other possible contestants. None of them is worse than ROS in terms of features; actually, you can find features in every one of the other middlewares that outperform ROS. If that is so, why or how has ROS achieved the status of becoming the standard?

A simple answer from my point of view: excellent learning tutorials and debugging tools.


Here is a video where Leila Takayama, an early developer of ROS, explains when she realized that the key to having ROS used worldwide would be to provide tools that simplify the reuse of ROS code. None of the other projects has such a set of clear and structured tutorials. In addition, few of the other middlewares provide debugging tools for their packages. The lack of these two essential elements is preventing new people from using those middlewares (even if I understand the developers of OROCOS and YARP for not providing them… who wants to write tutorials or build debugging tools… nobody!).

 

Additionally, it is not only about tutorials and debugging tools. The ROS creators also provide a good system for managing packages. The result is that developers worldwide can use each other’s packages in a (relatively) easy way. This created an explosion in the number of ROS packages available, providing almost anything off the shelf for your brand-new ROSified robot.

Now, the impressive rate at which contributions are made to the ROS ecosystem makes its growth almost unstoppable.


 

What about companies?

At the beginning, ROS was mostly used by students at universities. However, as ROS becomes more mature and the number of packages increases, companies are realizing that adopting ROS is also good for them, because they will be able to use code developed by others. On top of that, it will be easier for them to hire new engineers who already know the middleware (otherwise they would need to teach newcomers their own middleware).

As a result, many companies have jumped onto the ROS train, developing their products from scratch to be ROS-compatible. Examples include Robotnik, Fetch Robotics, Savioke, Pal Robotics, Yujin Robots, The Construct, Rapyuta Robotics, Erle Robotics, Shadow Robot and Clearpath, to name a few of the sponsors of the next ROSCon. By creating ROS-compatible products, they decreased their development time by several orders of magnitude.

To take things further, two Spanish companies have revolutionised the standardization of robotics products using the ROS middleware. On one hand, Robotnik has created the ROS Components shop, where anyone can buy ROS-compatible devices, ranging from mobile bases to sensors and actuators. On the other, Erle Robotics (now Acutronic Robotics) is developing Hardware ROS. H-ROS is a standardized software and hardware infrastructure for easily creating reusable and reconfigurable robot hardware parts. ROS is enabling hardware standardization too, but this time driven by companies, not research! That must mean something…


Finally, it looks like industrial robot manufacturers have understood the value that a standard can provide to their business. Even if they do not make their industrial robots ROS-enabled from the start, they are adopting ROS-Industrial, a flavour of ROS that allows them to ROSify their industrial robots and re-use all the software created for manipulators in the ROS ecosystem.

But are all companies jumping onto the ROS train? Not all of them!

Some companies, like Jibo, Aldebaran or Google, still do not rely on ROS for their robot programming. Some of them rely on their own middleware, created before ROS existed (that is the case for Aldebaran). Others are creating their own middleware from scratch. Their reasons: they do not believe ROS is good, they have already created a middleware, or they do not want their products to depend on somebody else’s middleware. Those companies have very fair reasons to go their own way. However, will that make them competitive? (If history is any guide, from mobile phones to VCRs, the answer may be no.)

So is ROS the standard for programming robots?

It is still too soon to answer that question. It looks like ROS is becoming the standard, but many things can change. It is unlikely that another middleware will take the current title from ROS, but it may happen. There could be a new player that wipes ROS off the map (maybe Google will release its middleware to the public – like it did with Android – and take the sector by storm?).

Still, ROS has its problems, such as a lack of security and the instability of some important packages. Even if the OSRF group is working hard to build a better ROS system (for instance, ROS 2 is in beta with many fundamental improvements), some hard work is still required on basic things (like the ROS controllers for real robots).


Given those numbers, at The Construct we believe that ROS IS THE STANDARD (that is why we are devoted to creating the best ROS learning tutorials in the world). In fact, it was thanks to this standardization that two Barcelona students were able to create an autonomous robot product for coffee shops in only three months, with zero prior knowledge of robotics (see the Barista robot).

This is the future, and it is good. In this future, thanks to standards, almost anyone will be able to design, build and program their own robotics product, similar to how PC stores build computers today.

So my advice, as I said to the Singapore engineer, is to bet on ROS. Right now, it is the best option for a robotics standard.

 

Long-term control of brain-computer interfaces by users with locked-in syndrome

Using brain-computer interfaces (BCIs) to give people with locked-in syndrome back reliable communication and control capabilities has long been a futuristic trope of medical dramas and sci-fi. A team from NCCR Robotics and CNBI, EPFL have recently published a paper detailing a step towards bringing this technique into the everyday lives of those affected by extreme paralysis.

BCIs measure brainwaves using sensors placed outside of the head. With careful training and calibration, these brainwaves can be used to understand the intention of the person they are recorded from. However, one of the challenges of using BCIs in everyday life is the variation in the BCI performance over time. This issue is particularly important for motor-restricted end-users, as they usually suffer from even higher fluctuations of their brain signals and resulting performance. One approach to tackle this issue is to use shared control approaches for BCI, which has so far been mostly based on predefined settings, providing a fixed level of assistance to the user.

The team tackled the issue of performance variation by developing a system capable of dynamically matching the user’s evolving capabilities with the appropriate level of assistance. The key element of this adaptive shared control framework is to incorporate the user’s brain state and signal reliability while the user is trying to deliver a BCI command.

The team tested their novel strategy with one person with incomplete locked-in syndrome, multiple times over the course of a year. The person was asked to imagine moving the right hand to trigger a “right command”, and the left hand for a “left command” to control an avatar in a computer game. They demonstrated how adaptive shared control can exploit an estimation of the BCI performance (in terms of command delivery time) to adjust online the level of assistance in a BCI game by regulating its speed. Remarkably, the results exhibited a stable performance over several months without recalibration of the BCI classifier or the performance estimator.
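
A simplified sketch of that idea, regulating the game speed from an online estimate of command delivery time (the target, gain and update rule are invented for illustration and are not taken from the paper):

    # Illustrative adaptive shared control: slow the task down when the user's
    # estimated BCI command delivery time grows, speed it up when it shrinks.
    def update_game_speed(current_speed, recent_delivery_times, target_time=2.0,
                          min_speed=0.5, max_speed=2.0, gain=0.1):
        """recent_delivery_times: recent seconds-per-command estimates for this user."""
        estimate = sum(recent_delivery_times) / len(recent_delivery_times)
        adjustment = gain * (target_time - estimate)   # slower commands -> lower speed, more assistance
        return max(min_speed, min(max_speed, current_speed + adjustment))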

This work marks the first time that this design has been successfully tested with an end-user with incomplete locked-in syndrome, and it successfully replicates the results of earlier tests with able-bodied subjects.

 

Reference:

S. Saeedi, R. Chavarriaga and J. del R. Millán, “Long-Term Stable Control of Motor-Imagery BCI by a Locked-In User Through Adaptive Assistance,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 25, no. 4, pp. 380-391.

Udacity Robotics video series: Interview with Jillian Ogle from Let’s Robot


Mike Salem from Udacity’s Robotics Nanodegree is hosting a series of interviews with professional roboticists as part of their free online material.

This week we’re featuring Mike’s interview with Jillian Ogle, Founder and CEO of Let’s Robot. Jillian is a video game designer turned roboticist attempting to combine games in robotics in a unique user experience.

You can find all the interviews here. We’ll be posting them regularly on Robohub.


EU’s future cyber-farms to utilise drones, robots and sensors

Farmers could protect the environment and cut down on fertiliser use with swarms of drones. Image credit – ‘Aerial View – Landschaft Markgräflerland’ by Taxiarchos228 is licenced under CC 3.0 unported
by Anthony King

Bee-based maths is helping teach swarms of drones to find weeds, while robotic mowers keep hedgerows in shape.

‘We observe the behaviour of bees. We gain knowledge of how the bees solve problems and with this we obtain rules of interaction that can be adapted to tell us how the robot swarms should work together,’ said Vito Trianni at the Institute of Cognitive Sciences and Technologies of the Italian National Research Council.

Honeybees, for example, run on an algorithm to allow them to choose the best nest site, even though no bee knows the full picture.

Trianni runs an EU-funded research project known as SAGA, which is using the power of robotic groupthink to keep crops weed free.

‘We can use low-cost robots and low-cost cameras. They can even be prone to error, but thanks to the cooperation they will be able to generate precise maps at centimetre scales,’ said Trianni.

‘They will initially spread over the field to inspect it at low resolution, but will then decide on areas that require more focus,’ said Trianni. ‘They can gather together in small groups closer to the ground.’

Importantly the drones make these decisions themselves, as a group.

Next spring, a swarm of the quadcopters will be released over a sugar beet field. They will stay in radio contact with each other and use algorithms learnt from the bees to cooperate and put together a map of weeds. This will then allow for targeted spraying of weeds or their mechanical removal on organic farms.

Today the most common way to control weeds is to spray entire fields with herbicide chemicals. Smarter spraying will save farmers money, but it will also lower the risk of resistance to the agrichemicals developing. And there will be an environmental benefit from spraying less herbicide.

Co-ops

Swarms of drones for mapping crop fields offer a service to farmers, while farm co-ops could even buy swarms themselves.

‘There is no need to fly them every day over your field, so it is possible to share the technology between multiple farmers,’ said Trianni. A co-op might buy 20 to 30 drones, but adjust the size of the swarm to the farm.

The drones are 1.5 kilos in weight and fly for around 20-30 minutes. For large fields, the drone swarms could operate in relay teams, with drones landing and being replaced by others.

It’s the kind of technology that is ideally suited to today’s large-scale farms, as is another remote technology that combines on-the-ground sensor information with satellite data to tell farmers how much nitrogen or water their fields need.

Wheat harvested from a field in Boigneville, 100 km south of Paris, France, in August this year will have been grown with the benefit of this data, as part of a pilot being run by an EU-funded project known as IOF2020, which involves over 70 partners and around 200 researchers.

‘Sensors are costing less and less, so at the end of the project we hope to have something farmers or farm cooperatives can deploy in their fields,’ explained Florence Leprince, a plant scientist at Arvalis – Institut du végétal, the French arable farming institute which is running the wheat experiment.

‘This will allow farmers be more precise and not overuse nitrogen or water.’ Florence Leprince, Arvalis – Institut du végétal, France

Adding too much nitrogen to a crop field costs farmers money, but it also has a negative environmental impact. Surplus nitrogen leaches from soils and into rivers and lakes, causing pollution.

The sensor data is needed because satellite pictures can indicate how much nitrogen is in a crop, but not how much is in the soil. The sensors will help add detail, though in a way that farmers will find easy to use.

It’s a similar story for the robotic hedge trimmer being developed by a separate group of researchers. All the farmer or groundskeeper needs to do is mark which hedge needs trimming.

‘The user will sketch the garden, though not too accurately,’ said Bob Fisher, computer vision scientist at Edinburgh University, UK, and coordinator of the EU-funded TrimBot2020 project. ‘The robot will go into the garden and come back with a tidied-up sketch map. At that point, the user can say go trim that hedge, or mark what’s needed on the map.’

This autumn will see the arm and the robot base assembled together, while the self-driving bot will be set off around the garden next spring.

More info:
SAGA (part of ECHORD Plus Plus)
IOF2020
TrimBot2020


Talking Machines: Data Science Africa, with Dina Machuve

In episode seven of season three we take a minute to break away from our regular format and feature a conversation with Dina Machuve of the Nelson Mandela African Institute of Science and Technology. We cover everything from her work to how cell phone access has changed data patterns. We got to talk with her at the Data Science Africa conference and workshop.

If you enjoyed this episode, you may also want to listen to:

See all the latest robotics news on Robohub, or sign up for our weekly newsletter.


Custom robots in a matter of minutes

Interactive Robogami enables the fabrication of a wide range of robot designs. Photo: MIT CSAIL

Even as robots become increasingly common, they remain incredibly difficult to make. From designing and modeling to fabricating and testing, the process is slow and costly: Even one small change can mean days or weeks of rethinking and revising important hardware.

But what if there were a way to let non-experts craft different robotic designs — in one sitting?

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are getting closer to doing exactly that. In a new paper, they present a system called “Interactive Robogami” that lets you design a robot in minutes, and then 3-D print and assemble it in as little as four hours.
 

One of the key features of the system is that it allows designers to determine both the robot’s movement (“gait”) and shape (“geometry”), a capability that’s often separated in design systems.

“Designing robots usually requires expertise that only mechanical engineers and roboticists have,” says PhD student and co-lead author Adriana Schulz. “What’s exciting here is that we’ve created a tool that allows a casual user to design their own robot by giving them this expert knowledge.”

The paper, which is being published in the new issue of the International Journal of Robotics Research, was co-led by PhD graduate Cynthia Sung alongside MIT professors Wojciech Matusik and Daniela Rus.

The other co-authors include PhD student Andrew Spielberg, former master’s student Wei Zhao, former undergraduate Robin Cheng, and Columbia University professor Eitan Grinspun. (Sung is now an assistant professor at the University of Pennsylvania.)

How it works

3-D printing has transformed the way that people can turn ideas into real objects, allowing users to move away from more traditional manufacturing. Despite these developments, current design tools still have space and motion limitations, and there’s a steep learning curve to understanding the various nuances.

Interactive Robogami aims to be much more intuitive. It uses simulations and interactive feedback with algorithms for design composition, allowing users to focus on high-level conceptual design. Users can choose from a library of over 50 different bodies, wheels, legs, and “peripherals,” as well as a selection of different steps (“gaits”).

Importantly, the system is able to guarantee that a design is actually possible, analyzing factors such as speed and stability to make suggestions and ensure that, for example, the user doesn’t create a robot so top-heavy that it can’t move without tipping over.

Once designed, the robot is then fabricated. The team’s origami-inspired “3-D print and fold” technique involves printing the design as flat faces connected at joints, and then folding the design into the final shape, combining the most effective parts of 2-D and 3-D printing.  

“3-D printing lets you print complex, rigid structures, while 2-D fabrication gives you lightweight but strong structures that can be produced quickly,” Sung says. “By 3-D printing 2-D patterns, we can leverage these advantages to develop strong, complex designs with lightweight materials.”

Results

To test the system, the team used eight subjects who were given 20 minutes of training and asked to perform two tasks.

One task involved creating a mobile, stable car design in just 10 minutes. In a second task, users were given a robot design and asked to create a trajectory to navigate the robot through an obstacle course in the least amount of travel time.

The team fabricated a total of six robots, each of which took 10 to 15 minutes to design, three to seven hours to print and 30 to 90 minutes to assemble. The team found that their 3-D print-and-fold method reduced printing time by 73 percent and the amount of material used by 70 percent. The robots also demonstrated a wide range of movement, like using single legs to walk, using different step sequences, and using legs and wheels simultaneously.

“You can quickly design a robot that you can print out, and that will help you do these tasks very quickly, easily, and cheaply,” says Sung. “It’s lowering the barrier to have everyone design and create their own robots.”

Rus hopes people will be able to incorporate robots to help with everyday tasks, and that similar systems with rapid printing technologies will enable large-scale customization and production of robots.

“These tools enable new approaches to teaching computational thinking and creating,” says Rus. “Students can not only learn by coding and making their own robots, but by bringing to life conceptual ideas about what their robots can actually do.”

While the current version focuses on designs that can walk, the team hopes that in the future, the robots can take flight. Another goal is to have the user be able to go into the system and define the behavior of the robot in terms of tasks it can perform.

“This tool enables rapid exploration of dynamic robots at an early stage in the design process,” says Moritz Bächer, a research scientist and head of the computational design and manufacturing group at Disney Research. “The expert defines the building blocks, with constraints and composition rules, and paves the way for non-experts to make complex robotic systems. This system will likely inspire follow-up work targeting the computational design of even more intricate robots.”

This research was supported by the National Science Foundation’s Expeditions in Computing program.

Robotbenchmark lets you program simulated robots from your browser

Cyberbotics Ltd. is launching https://robotbenchmark.net to allow everyone to program simulated robots online for free.

Robotbenchmark offers a series of robot programming challenges that address various topics across a wide range of difficulty levels, from middle school to PhD. Users don’t need to install any software on their computer: the cloud-based 3D robotics simulations run in a web page. They can learn programming by writing Python code to control robot behavior. The performance achieved by users is recorded and displayed online, so that they can challenge their friends and show off their robot programming skills on social networks. Everything is designed to be extremely easy to use, runs on any computer and any web browser, and is totally free of charge.
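
A challenge controller might look something like the sketch below; the robot object and its methods are placeholders standing in for whatever API the simulated robot exposes, not the actual robotbenchmark interface:

    def follow_line(robot, base_speed=3.0, gain=2.0):
        """Hypothetical line-following loop: steer toward the line at every simulation step."""
        while robot.step():                      # advance the simulation by one timestep
            error = robot.read_line_offset()     # placeholder sensor reading in [-1, 1]
            robot.set_wheel_speeds(base_speed - gain * error,
                                   base_speed + gain * error)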

This project is funded by Cyberbotics Ltd. and the Human Brain Project.

About Cyberbotics Ltd.: Cyberbotics is a Swiss company, a spin-off from the École Polytechnique Fédérale de Lausanne, specialized in the development of robotics simulation software. It has been developing and selling the Webots software for more than 19 years. Webots is a reference robotics simulation software, used in more than 1200 companies and universities across the world. Cyberbotics is also involved in industrial and research projects, such as the Human Brain Project.

About the Human Brain Project: The Human Brain Project is a large ten-year scientific research project that aims to build a collaborative ICT-based scientific research infrastructure to allow researchers across the globe to advance knowledge in the fields of neuroscience, computing, neurorobotics, and brain-related medicine. The Project, which started on 1 October 2013, is a European Commission Future and Emerging Technologies Flagship. Based in Geneva, Switzerland, it is coordinated by the École Polytechnique Fédérale de Lausanne and is largely funded by the European Union.

The Drone Center’s Weekly Roundup: 8/21/17

A U.S. Marine tests an Instant Eye drone during exercises on August 18 in Virginia. Credit: Lance Cpl. Michaela R. Gregory

August 14, 2017 – August 20, 2017

News

During a nighttime flight in the Persian Gulf, an Iranian surveillance drone followed a U.S. aircraft carrier and came within 300 feet of a U.S. fighter jet. It was the second time in a week that an Iranian drone interfered with U.S. Navy operations in the Gulf. In a statement, Iran’s Revolutionary Guard said that its drones were operated “accurately and professionally.” (Associated Press)

Commentary, Analysis, and Art

A report by the RAND Corporation argues that distributed, localized drone hubs can reduce energy consumption for drone delivery programs in urban centers. (StateScoop)

At the Economist, Tom Standage writes that toys like hobby drones can “sometimes give birth to important technologies.”

At the Atlantic, Naomi Nix looks at how the Kentucky Valley Educational Cooperative is investing in programs that teach students how to build and operate drones.

At Aviation Week, David Hambling examines the growing demand for small, pocket-sized military drones.

At Slate, Faine Greenwood argues that the U.S. military should not use consumer drones.

At the Los Angeles Times, W.J. Hennigan reports that U.S. drones are performing danger-close strikes in support of the Syrian Democratic Forces.

At TechCrunch, Jon Hegranes separates the “fiction from feasibility” of drone deliveries.

At DefenseNews, Adam Stone looks at how the U.S. Navy is investing in a command-and-control system with an eye to someday operating drones from carriers.  

At the Modern War Institute, Dan Maurer considers whether military ethics and codes should apply to robot soldiers.

At the San Francisco Chronicle, Benny Evangelista considers recent reports of close encounters between drones and manned aircraft.

At War is Boring, Robert Beckhusen writes that the Israeli military is investigating allegations that an Israeli drone manufacturer carried out a drone strike against Armenian soldiers as part of a product demonstration.

At Popular Mechanics, David Hambling considers whether an armed quadrotor drone is ethical or even practical.

At Washington Technology, Ross Wilkers looks into Boeing’s push to become a leader in the field of autonomous weapons systems.

Know Your Drone

Israeli firm Meteor Aerospace is developing a medium-altitude long-endurance surveillance and reconnaissance drone. (FlightGlobal)

Following the U.S. Army’s decision to discontinue use of its products, Chinese drone maker DJI is speeding the development of a security system that allows users to disconnect drones from DJI’s servers while in flight. (The New York Times)

Huntington Ingalls Industries demonstrated its Proteus dual-mode unmanned undersea vehicle at an exercise held by the U.S. Naval Surface Warfare Center. (Jane’s)

A team from the University of Sherbrooke is developing a fixed-wing drone that uses thrust to achieve perched landings. (IEEE Spectrum)

The U.S. Naval Research Laboratory is developing a fuel cell-powered drone called Hybrid Tiger that could have an endurance of up to three days. (Jane’s)

China’s Beijing Sifang Automation is developing an autonomous unmanned boat called SeaFly, which it hopes will be ready for production by the end of the year. (Jane’s)

The U.S. Defense Advanced Research Projects Agency unveiled the Assured Autonomy program, which seeks to build better trustworthiness into a range of military unmanned systems. (Shephard Media)

Taiwan’s National Chung-Shan Institute of Science and Technology unveiled an anti-radiation loitering munition drone. (Shephard Media)

Amazon has patented a retractable tube that can be used to funnel packages from delivery drones to the ground. (Puget Sound Business Journal)

Meanwhile, Wal-Mart has been awarded a patent for a floating warehouse that could be used to carry goods for drone deliveries. (CNBC)

Telecommunications company AT&T is looking to develop autonomous systems to make drones more efficient for cell tower inspections. (Unmanned Aerial Online)  

Drones at Work

The U.S. Forest Service used a drone to collect data over the Minerva Fire in the Plumas National Forest. (Unmanned Aerial Online)

A team of researchers from Oklahoma State University and the University of Nebraska are planning to use drones to study atmospheric conditions during the upcoming solar eclipse. (Popular Science)

In a test, U.S. drone maker General Atomics flew its new Grey Eagle Extended Range drone for 42 hours. (Jane’s)

In a U.S. Navy exercise, an MQ-8B Fire Scout helicopter drone was handed off between control stations while in flight. (Shephard Media)

A U.S. MQ-1 Predator drone crashed shortly after taking off from Incirlik Air Base in Turkey. (Military.com)

The Michigan Department of Corrections announced that three people have been arrested after attempting to use a drone to smuggle drugs and a cellphone into a prison in the city of Ionia. (New York Post)

Meanwhile, Border Patrol agents in San Diego, California arrested a man for allegedly flying a drone laden with drugs over the U.S.-Mexico border. (The San Diego Tribune)

A medevac helicopter responding to a fatal car crash in Michigan had to abort its first landing attempt at the scene because a drone was spotted flying over the area. (MLive)

NASA plans to once again use its Global Hawk high-altitude drone to study severe storms over the Pacific this hurricane season. (International Business Times)

Police in Glynn County, Georgia used a drone to search for a suspect fleeing in a marshy area. (Florida Times-Union)

The Regina Police Service Traffic Unit in Canada is acquiring drones to collect data over collision scenes. (Global News)

A photo essay at the National Review examines quadrotor drones at work in both civilian and military spheres.

Industry Intel

DefenseNews reports that General Atomics is hoping to sell around 90 Avenger drones, the successor to the Reaper, to an unnamed international customer.

The Ohio Federal Research Network is behind a $7 million initiative to make Ohio a center for drone research. (Dayton Daily News)

The U.K.’s Defense Science and Technology Laboratory awarded Qinetiq a $5.8 million contract to lead the Maritime Autonomous Platform Exploitation project. (Shephard Media)

Insitu has partnered with FireWhat and Esri to provide firefighters with improved aerial intelligence. (Shephard Media)

3DR, Global Aerospace, and Harpenau Insurance have partnered to offer businesses drone insurance that covers legal liability and physical damage. (TechRepublic)

Aerialtronics, a Dutch industrial drone maker, announced that it has applied for a solvency procedure and will seek new investors. (Press Release)

The U.S. Navy awarded Insitu a $7.5 million foreign military sales contract for six ScanEagle drones for the Philippines. (DoD)

 The U.S. Navy awarded Insitu a $319,886 contract for the procurement of Strongback Module Assemblies.

The U.S. Air Force awarded Area I a $5 million contract for the development of air-launched drones. (FBO)

The U.S. Army awarded Gird Systems a $148,364 contract for squad-level counter-drone technology. (FBO)

The U.S. Army awarded Airspace Systems a $1.9 million contract for autonomous drone defense. (FBO)

The U.S. Army awarded RPX Technologies a $147,905 contract for a micro IR thermal imaging camera for nano-UAVs. (FBO)

The U.S. Department of Interior awarded Brocktek a $65,000 contract for 3DR Solo drones. (FBO)  

For updates, news, and commentary, follow us on Twitter. The Weekly Drone Roundup is a newsletter from the Center for the Study of the Drone. It covers news, commentary, analysis and technology from the drone world. You can subscribe to the Roundup here.

Using machine learning to improve patient care

Credit: Shutterstock / MIT

Doctors are often deluged by signals from charts, test results, and other metrics to keep track of. It can be difficult to integrate and monitor all of these data for multiple patients while making real-time treatment decisions, especially when data is documented inconsistently across hospitals.

In a new pair of papers, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) explore ways for computers to help doctors make better medical decisions.

One team created a machine-learning approach called “ICU Intervene” that takes large amounts of intensive-care-unit (ICU) data, from vitals and labs to notes and demographics, to determine what kinds of treatments are needed for different symptoms. The system uses “deep learning” to make real-time predictions, learning from past ICU cases to make suggestions for critical care, while also explaining the reasoning behind these decisions.

“The system could potentially be an aid for doctors in the ICU, which is a high-stress, high-demand environment,” says PhD student Harini Suresh, lead author on the paper about ICU Intervene. “The goal is to leverage data from medical records to improve health care and predict actionable interventions.”

Another team developed an approach called “EHR Model Transfer” that can facilitate the application of predictive models on an electronic health record (EHR) system, despite being trained on data from a different EHR system. Specifically, using this approach the team showed that predictive models for mortality and prolonged length of stay can be trained on one EHR system and used to make predictions in another.

ICU Intervene was co-developed by Suresh, undergraduate student Nathan Hunt, postdoc Alistair Johnson, researcher Leo Anthony Celi, MIT Professor Peter Szolovits, and PhD student Marzyeh Ghassemi. It was presented this month at the Machine Learning for Healthcare Conference in Boston.

EHR Model Transfer was co-developed by lead authors Jen Gong and Tristan Naumann, both PhD students at CSAIL, as well as Szolovits and John Guttag, who is the Dugald C. Jackson Professor in Electrical Engineering. It was presented at the ACM’s Special Interest Group on Knowledge Discovery and Data Mining in Halifax, Canada.

Both models were trained using data from the critical care database MIMIC, which includes de-identified data from roughly 40,000 critical care patients and was developed by the MIT Lab for Computational Physiology.

ICU Intervene

Integrated ICU data is vital to automating the process of predicting patients’ health outcomes.

“Much of the previous work in clinical decision-making has focused on outcomes such as mortality (likelihood of death), while this work predicts actionable treatments,” Suresh says. “In addition, the system is able to use a single model to predict many outcomes.”

ICU Intervene focuses on hourly prediction of five different interventions that cover a wide variety of critical care needs, such as breathing assistance, improving cardiovascular function, lowering blood pressure, and fluid therapy.

At each hour, the system extracts values from the data that represent vital signs, as well as clinical notes and other data points. All of the data are represented with values that indicate how far off a patient is from the average (to then evaluate further treatment).
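
In practice, “how far off a patient is from the average” typically means standardizing each measurement against reference statistics; a minimal sketch of that hourly featurization (the vital names and reference values are made up for the example):

    # Illustrative hourly featurization: express each vital sign as a z-score
    # relative to a reference mean and standard deviation.
    REFERENCE_STATS = {               # made-up reference values for the example
        "heart_rate":       (80.0, 15.0),
        "systolic_bp":      (120.0, 20.0),
        "respiratory_rate": (16.0, 4.0),
    }

    def hourly_features(measurements):
        """measurements: dict mapping vital name -> value observed in the past hour."""
        features = {}
        for name, (mean, std) in REFERENCE_STATS.items():
            value = measurements.get(name, mean)    # fall back to the mean if a value is missing
            features[name] = (value - mean) / std   # distance from average, in standard deviations
        return features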

Importantly, ICU Intervene can make predictions far into the future. For example, the model can predict whether a patient will need a ventilator six hours later rather than just 30 minutes or an hour later. The team also focused on providing reasoning for the model’s predictions, giving physicians more insight.

“Deep neural-network-based predictive models in medicine are often criticized for their black-box nature,” says Nigam Shah, an associate professor of medicine at Stanford University who was not involved in the paper. “However, these authors predict the start and end of medical interventions with high accuracy, and are able to demonstrate interpretability for the predictions they make.”

The team found that the system outperformed previous work in predicting interventions, and was especially good at predicting the need for vasopressors, a medication that tightens blood vessels and raises blood pressure.

In the future, the researchers will be trying to improve ICU Intervene to be able to give more individualized care and provide more advanced reasoning for decisions, such as why one patient might be able to taper off steroids, or why another might need a procedure like an endoscopy.

EHR Model Transfer

Another important consideration for leveraging ICU data is how it’s stored and what happens when that storage method gets changed. Existing machine-learning models need data to be encoded in a consistent way, so the fact that hospitals often change their EHR systems can create major problems for data analysis and prediction.

That’s where EHR Model Transfer comes in. The approach works across different versions of EHR platforms, using natural language processing to identify clinical concepts that are encoded differently across systems and then mapping them to a common set of clinical concepts (such as “blood pressure” and “heart rate”).
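
Conceptually, the mapping step can be pictured as translating each system’s local codes into shared concept names before training or prediction; the codes below are invented for illustration:

    # Illustrative mapping from EHR-specific item codes to shared clinical concepts.
    EHR_A_TO_CONCEPT = {"vital_001": "heart_rate", "vital_002": "systolic_bp"}
    EHR_B_TO_CONCEPT = {"HR": "heart_rate", "NBP_SYS": "systolic_bp"}

    def to_common_concepts(record, code_map):
        """Re-key one patient record so models see the same feature names everywhere."""
        return {code_map[code]: value
                for code, value in record.items() if code in code_map}

    # A model trained on records from EHR A can then score a record from EHR B:
    # to_common_concepts({"HR": 92, "NBP_SYS": 138}, EHR_B_TO_CONCEPT)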

For example, a patient in one EHR platform could be switching hospitals and would need their data transferred to a different type of platform. EHR Model Transfer aims to ensure that the model could still predict aspects of that patient’s ICU visit, such as their likelihood of a prolonged stay or even of dying in the unit.

“Machine-learning models in health care often suffer from low external validity, and poor portability across sites,” says Shah. “The authors devise a nifty strategy for using prior knowledge in medical ontologies to derive a shared representation across two sites that allows models trained at one site to perform well at another site. I am excited to see such creative use of codified medical knowledge in improving portability of predictive models.”

With EHR Model Transfer, the team tested their model’s ability to predict two outcomes: mortality and the need for a prolonged stay. They trained it on one EHR platform and then tested its predictions on a different platform. EHR Model Transfer was found to outperform baseline approaches and demonstrated better transfer of predictive models across EHR versions compared to using EHR-specific events alone.

In the future, the EHR Model Transfer team plans to evaluate the system on data and EHR systems from other hospitals and care settings.

Both papers were supported, in part, by the Intel Science and Technology Center for Big Data and the National Library of Medicine. The paper detailing EHR Model Transfer was additionally supported by the National Science Foundation and Quanta Computer, Inc.
