
Long-term control of brain-computer interfaces by users with locked-in syndrome

Using brain-computer interfaces (BCIs) to restore reliable communication and control to people with locked-in syndrome has long been a futuristic trope of medical dramas and sci-fi. A team from NCCR Robotics and CNBI, EPFL has recently published a paper detailing a step towards bringing this technology into the everyday lives of those affected by extreme paralysis.

BCIs measure brainwaves using sensors placed outside the head. With careful training and calibration, these brainwaves can be used to infer the intention of the person they are recorded from. However, one of the challenges of using BCIs in everyday life is the variation in BCI performance over time. This issue is particularly acute for motor-restricted end-users, whose brain signals, and therefore performance, tend to fluctuate even more. One approach to tackling this issue is shared control, which has so far mostly been based on predefined settings that provide a fixed level of assistance to the user.

The team tackled the issue of performance variation by developing a system capable of dynamically matching the user’s evolving capabilities with the appropriate level of assistance. The key element of this adaptive shared control framework is to incorporate the user’s brain state and signal reliability while the user is trying to deliver a BCI command.

The team tested their novel strategy with one person with incomplete locked-in syndrome, multiple times over the course of a year. The person was asked to imagine moving the right hand to trigger a “right” command, and the left hand for a “left” command, to control an avatar in a computer game. They demonstrated how adaptive shared control can exploit an estimate of BCI performance (in terms of command delivery time) to adjust the level of assistance online by regulating the game’s speed. Remarkably, the results exhibited stable performance over several months without recalibration of the BCI classifier or the performance estimator.
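To make the approach concrete, here is a minimal sketch of assistance regulated by estimated command delivery time, in the spirit of the paper; the class name, parameter values, and smoothing window are illustrative, not the authors’ implementation:

```python
# Minimal sketch of adaptive assistance driven by estimated BCI performance.
# Names and numbers are illustrative, not the published implementation.
from collections import deque

class AdaptiveAssistance:
    def __init__(self, target_time=3.0, window=10):
        self.target_time = target_time       # comfortable seconds per command
        self.history = deque(maxlen=window)  # recent command delivery times (s)
        self.speed = 1.0                     # game-speed multiplier

    def record_command(self, delivery_time):
        """Update the running performance estimate after each delivered command."""
        self.history.append(delivery_time)
        estimate = sum(self.history) / len(self.history)
        # Longer delivery times (less reliable control) slow the game down,
        # i.e. raise the level of assistance; recovery speeds it back up.
        self.speed = max(0.25, min(1.0, self.target_time / estimate))
        return self.speed

assistant = AdaptiveAssistance()
print(assistant.record_command(4.5))  # a sluggish command slows the game
```

The point of the sketch is the closed loop: the assistance level is recomputed online from the same quantity (command delivery time) that the paper uses as its performance estimate, rather than being fixed in advance.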

This work marks the first time this design has been successfully tested with an end-user with incomplete locked-in syndrome, and it replicates the results of earlier tests with able-bodied subjects.

 

Reference:

S. Saeedi, R. Chavarriaga and J. del R. Millán, “Long-Term Stable Control of Motor-Imagery BCI by a Locked-In User Through Adaptive Assistance,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 25, no. 4, pp. 380–391, 2017.

Udacity Robotics video series: Interview with Jillian Ogle from Let’s Robot


Mike Salem from Udacity’s Robotics Nanodegree is hosting a series of interviews with professional roboticists as part of their free online material.

This week we’re featuring Mike’s interview with Jillian Ogle, Founder and CEO of Let’s Robot. Jillian is a video game designer turned roboticist attempting to combine games and robotics into a unique user experience.

You can find all the interviews here. We’ll be posting them regularly on Robohub.

EU’s future cyber-farms to utilise drones, robots and sensors

Farmers could protect the environment and cut down on fertiliser use with swarms of drones. Image credit – ‘Aerial View – Landschaft Markgräflerland’ by Taxiarchos228 is licensed under CC 3.0 Unported
by Anthony King

Bee-based maths is helping teach swarms of drones to find weeds, while robotic mowers keep hedgerows in shape.

‘We observe the behaviour of bees. We gain knowledge of how the bees solve problems and with this we obtain rules of interaction that can be adapted to tell us how the robot swarms should work together,’ said Vito Trianni at the Institute of Cognitive Sciences and Technologies of the Italian National Research Council.

Honeybees, for example, run on an algorithm that allows them to choose the best nest site, even though no single bee knows the full picture.

Trianni runs an EU-funded research project known as SAGA, which is using the power of robotic groupthink to keep crops weed free.

‘We can use low-cost robots and low-cost cameras. They can even be prone to error, but thanks to the cooperation they will be able to generate precise maps at centimetre scales,’ said Trianni.

‘They will initially spread over the field to inspect it at low resolution, but will then decide on areas that require more focus,’ said Trianni. ‘They can gather together in small groups closer to the ground.’

Importantly the drones make these decisions themselves, as a group.

Next spring, a swarm of the quadcopters will be released over a sugar beet field. They will stay in radio contact with each other and use algorithms learnt from the bees to cooperate and put together a map of weeds. This will then allow for targeted spraying of weeds or their mechanical removal on organic farms.
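As a rough illustration of how bee-style recruitment can produce a group decision, the hedged sketch below lets a swarm pick which field cells deserve close inspection; the voting scheme, weights, and threshold are invented for illustration and are not SAGA’s actual algorithm:

```python
# Toy bee-inspired focus selection: each "vote" recruits for a field cell
# with probability proportional to observed weed evidence, like a waggle
# dance for a candidate nest site. Cells reaching a quorum win close passes.
import random
from collections import Counter

def choose_focus_areas(cell_evidence, n_votes=200, share_threshold=0.2):
    """cell_evidence: {cell_id: weed score in (0, 1]} pooled over the swarm."""
    cells, weights = zip(*cell_evidence.items())
    votes = Counter(random.choices(cells, weights=weights, k=n_votes))
    return sorted(c for c, v in votes.items() if v / n_votes >= share_threshold)

print(choose_focus_areas({"A1": 0.9, "A2": 0.1, "B1": 0.6, "B2": 0.05}))
# Typically ['A1', 'B1']: the weedy cells attract the swarm's attention.
```

No single drone needs the full weed map; the quorum emerges from many weighted recruitments, which is the property borrowed from nest-site selection.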

Today the most common way to control weeds is to spray entire fields with herbicide chemicals. Smarter spraying will save farmers money, but it will also lower the risk of resistance to the agrichemicals developing. And there will be an environmental benefit from spraying less herbicide.

Co-ops

Swarms of drones for mapping crop fields offer a service to farmers, while farm co-ops could even buy swarms themselves.

‘There is no need to fly them every day over your field, so it is possible to share the technology between multiple farmers,’ said Trianni. A co-op might buy 20 to 30 drones, but adjust the size of the swarm to the farm.

The drones weigh 1.5 kilograms and fly for around 20–30 minutes. For large fields, the drone swarms could operate in relay teams, with drones landing and being replaced by others.

It’s the kind of technology that is ideally suited to today’s large-scale farms, as is another remote technology that combines on-the-ground sensor information with satellite data to tell farmers how much nitrogen or water their fields need.

Wheat harvested from a field in Boigneville, 100 km south of Paris, France, in August this year will have been grown with the benefit of this data, as part of a pilot being run by an EU-funded project known as IOF2020, which involves over 70 partners and around 200 researchers.

‘Sensors are costing less and less, so at the end of the project we hope to have something farmers or farm cooperatives can deploy in their fields,’ explained Florence Leprince, a plant scientist at Arvalis – Institut du végétal, the French arable farming institute which is running the wheat experiment.

‘This will allow farmers to be more precise and not overuse nitrogen or water.’ Florence Leprince, Arvalis – Institut du végétal, France

Adding too much nitrogen to a crop field costs farmers money, but it also has a negative environmental impact. Surplus nitrogen leaches from soils and into rivers and lakes, causing pollution.

The sensor data is needed because satellite pictures can indicate how much nitrogen is in a crop, but not how much is in the soil. The sensors will help add that detail, in a way that farmers will find easy to use.

It’s a similar story for the robotic hedge trimmer being developed by a separate group of researchers. All the farmer or groundskeeper needs to do is mark which hedge needs trimming.

‘The user will sketch the garden, though not too accurately,’ said Bob Fisher, computer vision scientist at Edinburgh University, UK, and coordinator of the EU-funded TrimBot2020 project. ‘The robot will go into the garden and come back with a tidied-up sketch map. At that point, the user can say go trim that hedge, or mark what’s needed on the map.’

This autumn will see the arm and the robot base assembled together, while the self-driving bot will be set off around the garden next spring.

More info:
SAGA (part of ECHORD Plus Plus)
IOF2020
TrimBot2020

Talking Machines: Data Science Africa, with Dina Machuve

In episode seven of season three we take a minute to break away from our regular format and feature a conversation with Dina Machuve of the Nelson Mandela African Institution of Science and Technology. We cover everything from her work to how cell phone access has changed data patterns. We got to talk with her at the Data Science Africa conference and workshop.

If you enjoyed this episode, you may also want to listen to:

See all the latest robotics news on Robohub, or sign up for our weekly newsletter.

Custom robots in a matter of minutes

Interactive Robogami enables the fabrication of a wide range of robot designs. Photo: MIT CSAIL

Even as robots become increasingly common, they remain incredibly difficult to make. From designing and modeling to fabricating and testing, the process is slow and costly: Even one small change can mean days or weeks of rethinking and revising important hardware.

But what if there were a way to let non-experts craft different robotic designs — in one sitting?

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are getting closer to doing exactly that. In a new paper, they present a system called “Interactive Robogami” that lets you design a robot in minutes, and then 3-D print and assemble it in as little as four hours.
 

One of the key features of the system is that it allows designers to determine both the robot’s movement (“gait”) and shape (“geometry”), a capability that’s often separated in design systems.

“Designing robots usually requires expertise that only mechanical engineers and roboticists have,” says PhD student and co-lead author Adriana Schulz. “What’s exciting here is that we’ve created a tool that allows a casual user to design their own robot by giving them this expert knowledge.”

The paper, which is being published in the new issue of the International Journal of Robotics Research, was co-led by PhD graduate Cynthia Sung alongside MIT professors Wojciech Matusik and Daniela Rus.

The other co-authors include PhD student Andrew Spielberg, former master’s student Wei Zhao, former undergraduate Robin Cheng, and Columbia University professor Eitan Grinspun. (Sung is now an assistant professor at the University of Pennsylvania.)

How it works

3-D printing has transformed the way that people can turn ideas into real objects, allowing users to move away from more traditional manufacturing. Despite these developments, current design tools still have space and motion limitations, and there’s a steep learning curve to understanding the various nuances.

Interactive Robogami aims to be much more intuitive. It uses simulations and interactive feedback with algorithms for design composition, allowing users to focus on high-level conceptual design. Users can choose from a library of over 50 different bodies, wheels, legs, and “peripherals,” as well as a selection of different steps (“gaits”).

Importantly, the system is able to guarantee that a design is actually possible, analyzing factors such as speed and stability to make suggestions and ensure that, for example, the user doesn’t create a robot so top-heavy that it can’t move without tipping over.
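The published system’s feasibility analysis is more sophisticated, but a toy version of the top-heaviness test might look like the following sketch, where a design passes only if its centre of mass projects inside the footprint of its ground contacts (a bounding-box simplification of a true support-polygon test):

```python
# Hypothetical static-stability check in the spirit of the one described
# above; all masses and coordinates below are invented for illustration.
def center_of_mass(parts):
    """parts: list of (mass_kg, (x, y, z)) for bodies, legs, wheels, peripherals."""
    total = sum(m for m, _ in parts)
    return tuple(sum(m * pos[i] for m, pos in parts) / total for i in range(3))

def is_statically_stable(parts, contacts, margin=0.01):
    """contacts: (x, y) ground-contact points; margin in metres."""
    cx, cy, _ = center_of_mass(parts)
    xs, ys = [x for x, _ in contacts], [y for _, y in contacts]
    return (min(xs) + margin <= cx <= max(xs) - margin and
            min(ys) + margin <= cy <= max(ys) - margin)

# A tall mast offset over a narrow wheelbase fails the check:
parts = [(1.0, (0.0, 0.0, 0.05)), (0.8, (0.12, 0.0, 0.30))]
wheels = [(-0.05, -0.05), (-0.05, 0.05), (0.05, -0.05), (0.05, 0.05)]
print(is_statically_stable(parts, wheels))  # False: the robot would tip over
```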

Once designed, the robot is then fabricated. The team’s origami-inspired “3-D print and fold” technique involves printing the design as flat faces connected at joints, and then folding the design into the final shape, combining the most effective parts of 2-D and 3-D printing.  

“3-D printing lets you print complex, rigid structures, while 2-D fabrication gives you lightweight but strong structures that can be produced quickly,” Sung says. “By 3-D printing 2-D patterns, we can leverage these advantages to develop strong, complex designs with lightweight materials.”

Results

To test the system, the team used eight subjects who were given 20 minutes of training and asked to perform two tasks.

One task involved creating a mobile, stable car design in just 10 minutes. In a second task, users were given a robot design and asked to create a trajectory to navigate the robot through an obstacle course in the least amount of travel time.

The team fabricated a total of six robots, each of which took 10 to 15 minutes to design, three to seven hours to print and 30 to 90 minutes to assemble. The team found that their 3-D print-and-fold method reduced printing time by 73 percent and the amount of material used by 70 percent. The robots also demonstrated a wide range of movement, like using single legs to walk, using different step sequences, and using legs and wheels simultaneously.

“You can quickly design a robot that you can print out, and that will help you do these tasks very quickly, easily, and cheaply,” says Sung. “It’s lowering the barrier to have everyone design and create their own robots.”

Rus hopes people will be able to incorporate robots to help with everyday tasks, and that similar systems with rapid printing technologies will enable large-scale customization and production of robots.

“These tools enable new approaches to teaching computational thinking and creating,” says Rus. “Students can not only learn by coding and making their own robots, but by bringing to life conceptual ideas about what their robots can actually do.”

While the current version focuses on designs that can walk, the team hopes that in the future, the robots can take flight. Another goal is to have the user be able to go into the system and define the behavior of the robot in terms of tasks it can perform.

“This tool enables rapid exploration of dynamic robots at an early stage in the design process,” says Moritz Bächer, a research scientist and head of the computational design and manufacturing group at Disney Research. “The expert defines the building blocks, with constraints and composition rules, and paves the way for non-experts to make complex robotic systems. This system will likely inspire follow-up work targeting the computational design of even more intricate robots.”

This research was supported by the National Science Foundation’s Expeditions in Computing program.

Robotbenchmark lets you program simulated robots from your browser

Cyberbotics Ltd. is launching https://robotbenchmark.net to allow everyone to program simulated robots online for free.

Robotbenchmark offers a series of robot programming challenges that address various topics across a wide range of difficulty levels, from middle school to PhD. Users don’t need to install any software on their computers: the cloud-based 3D robotics simulations run in a web page. They can learn programming by writing Python code to control robot behavior. The performance achieved by users is recorded and displayed online, so that they can challenge their friends and show off their robot-programming skills on social networks. Everything is designed to be extremely easy to use, runs on any computer and any web browser, and is totally free of charge.
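For flavour, a user’s solution is a controller script for the underlying Webots simulator. A minimal sketch in its Python API is shown below; the device names and velocities are hypothetical and depend on the robot in each challenge:

```python
# Minimal Webots-style controller sketch of the kind written for
# robotbenchmark challenges. Device names here are hypothetical.
from controller import Robot  # provided by the Webots runtime

robot = Robot()
timestep = int(robot.getBasicTimeStep())

left = robot.getDevice("left wheel motor")
right = robot.getDevice("right wheel motor")
for motor in (left, right):
    motor.setPosition(float("inf"))  # switch the motor to velocity control
    motor.setVelocity(0.0)

while robot.step(timestep) != -1:
    # Drive straight ahead; a real solution would read sensors and steer.
    left.setVelocity(3.0)
    right.setVelocity(3.0)
```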

This project is funded by Cyberbotics Ltd. and the Human Brain Project.

About Cyberbotics Ltd.: Cyberbotics is a Swiss-based company, a spin-off from the École Polytechnique Fédérale de Lausanne, specialized in the development of robotics simulation software. It has been developing and selling the Webots software for more than 19 years. Webots is a reference software package for robotics simulation, used in more than 1200 companies and universities across the world. Cyberbotics is also involved in industrial and research projects, such as the Human Brain Project.

About the Human Brain Project: The Human Brain Project is a large ten-year scientific research project that aims to build a collaborative ICT-based scientific research infrastructure to allow researchers across the globe to advance knowledge in the fields of neuroscience, computing, neurorobotics, and brain-related medicine. The Project, which started on 1 October 2013, is a European Commission Future and Emerging Technologies Flagship. Based in Geneva, Switzerland, it is coordinated by the École Polytechnique Fédérale de Lausanne and is largely funded by the European Union.

The Drone Center’s Weekly Roundup: 8/21/17

A U.S. Marine tests an Instant Eye drone during exercises on August 18 in Virginia. Credit: Lance Cpl. Michaela R. Gregory

August 14, 2017 – August 20, 2017

News

During a nighttime flight in the Persian Gulf, an Iranian surveillance drone followed a U.S. aircraft carrier and came within 300 feet of a U.S. fighter jet. It was the second time in a week that an Iranian drone interfered with U.S. Navy operations in the Gulf. In a statement, Iran’s Revolutionary Guard said that its drones were operated “accurately and professionally.” (Associated Press)

Commentary, Analysis, and Art

A report by the RAND Corporation argues that distributed, localized drone hubs can reduce energy consumption for drone delivery programs in urban centers. (StateScoop)

At the Economist, Tom Standage writes that toys like hobby drones can “sometimes give birth to important technologies.”

At the Atlantic, Naomi Nix looks at how the Kentucky Valley Educational Cooperative is investing in programs that teach students how to build and operate drones.

At Aviation Week, David Hambling examines the growing demand for small, pocket-sized military drones.

At Slate, Faine Greenwood argues that the U.S. military should not use consumer drones.

At the Los Angeles Times, W.J. Hennigan reports that U.S. drones are performing danger-close strikes in support of the Syrian Democratic Forces.

At TechCrunch, Jon Hegranes separates the “fiction from feasibility” of drone deliveries.

At DefenseNews, Adam Stone looks at how the U.S. Navy is investing in a command-and-control system with an eye to someday operating drones from carriers.  

At the Modern War Institute, Dan Maurer considers whether military ethics and codes should apply to robot soldiers.

At the San Francisco Chronicle, Benny Evangelista considers recent reports of close encounters between drones and manned aircraft.

At War is Boring, Robert Beckhusen writes that the Israeli military is investigating allegations that an Israeli drone manufacturer carried out a drone strike against Armenian soldiers as part of a product demonstration.

At Popular Mechanics, David Hambling considers whether an armed quadrotor drone is ethical or even practical.

At Washington Technology, Ross Wilkers looks into Boeing’s push to become a leader in the field of autonomous weapons systems.

Know Your Drone

Israeli firm Meteor Aerospace is developing a medium-altitude long-endurance surveillance and reconnaissance drone. (FlightGlobal)

Following the U.S. Army’s decision to discontinue use of its products, Chinese drone maker DJI is speeding the development of a security system that allows users to disconnect drones from DJI’s servers while in flight. (The New York Times)

Huntington Ingalls Industries demonstrated its Proteus dual-mode unmanned undersea vehicle at an exercise held by the U.S. Naval Surface Warfare Center. (Jane’s)

A team from the University of Sherbrooke is developing a fixed-wing drone that uses thrust to achieve perched landings. (IEEE Spectrum)

The U.S. Naval Research Laboratory is developing a fuel cell-powered drone called Hybrid Tiger that could have an endurance of up to three days. (Jane’s)

China’s Beijing Sifang Automation is developing an autonomous unmanned boat called SeaFly, which it hopes will be ready for production by the end of the year. (Jane’s)

The U.S. Defense Advanced Research Projects Agency unveiled the Assured Autonomy program, which seeks to build better trustworthiness into a range of military unmanned systems. (Shephard Media)

Taiwan’s National Chung-Shan Institute of Science and Technology unveiled an anti-radiation loitering munition drone. (Shephard Media)

Amazon has patented a retractable tube that can be used to funnel packages from delivery drones to the ground. (Puget Sound Business Journal)

Meanwhile, Wal-Mart has been awarded a patent for a floating warehouse that could be used to carry goods for drone deliveries. (CNBC)

Telecommunications company AT&T is looking to develop autonomous systems to make drones more efficient for cell tower inspections. (Unmanned Aerial Online)  

Drones at Work

The U.S. Forest Service used a drone to collect data over the Minerva Fire in the Plumas National Forest. (Unmanned Aerial Online)

A team of researchers from Oklahoma State University and the University of Nebraska are planning to use drones to study atmospheric conditions during the upcoming solar eclipse. (Popular Science)

In a test, U.S. drone maker General Atomics flew its new Grey Eagle Extended Range drone for 42 hours. (Jane’s)

In a U.S. Navy exercise, an MQ-8B Fire Scout helicopter drone was handed off between control stations while in flight. (Shephard Media)

A U.S. MQ-1 Predator drone crashed shortly after taking off from Incirlik Air Base in Turkey. (Military.com)

The Michigan Department of Corrections announced that three people have been arrested after attempting to use a drone to smuggle drugs and a cellphone into a prison in the city of Ionia. (New York Post)

Meanwhile, Border Patrol agents in San Diego, California arrested a man for allegedly flying a drone laden with drugs over the U.S.-Mexico border. (The San Diego Union-Tribune)

A medevac helicopter responding to a fatal car crash in Michigan had to abort its first landing attempt at the scene because a drone was spotted flying over the area. (MLive)

NASA plans to once again use its Global Hawk high-altitude drone to study severe storms over the Pacific this hurricane season. (International Business Times)

Police in Glynn County, Georgia used a drone to search for a suspect fleeing in a marshy area. (Florida Times-Union)

The Regina Police Service Traffic Unit in Canada is acquiring drones to collect data over collision scenes. (Global News)

A photo essay at the National Review examines quadrotor drones at work in both civilian and military spheres.

Industry Intel

DefenseNews reports that General Atomics is hoping to sell around 90 Avenger drones, the successor to the Reaper, to an unnamed international customer.

The Ohio Federal Research Network is behind a $7 million initiative to make Ohio a center for drone research. (Dayton Daily News)

The U.K.’s Defense Science and Technology Laboratory awarded Qinetiq a $5.8 million contract to lead the Maritime Autonomous Platform Exploitation project. (Shephard Media)

Insitu has partnered with FireWhat and Esri to provide firefighters with improved aerial intelligence. (Shephard Media)

3DR, Global Aerospace, and Harpenau Insurance have partnered to offer businesses drone insurance that covers legal liability and physical damage. (TechRepublic)

Aerialtronics, a Dutch industrial drone maker, announced that it has applied for a solvency procedure and will seek new investors. (Press Release)

The U.S. Navy awarded Insitu a $7.5 million foreign military sales contract for six ScanEagle drones for the Philippines. (DoD)

The U.S. Navy awarded Insitu a $319,886 contract for the procurement of Strongback Module Assemblies.

The U.S. Air Force awarded Area I a $5 million contract for the development of air-launched drones. (FBO)

The U.S. Army awarded Gird Systems a $148,364 contract for squad-level counter-drone technology. (FBO)

The U.S. Army awarded Airspace Systems a $1.9 million contract for autonomous drone defense. (FBO)

The U.S. Army awarded RPX Technologies a $147,905 contract for a micro IR thermal imaging camera for nano-UAVs. (FBO)

The U.S. Department of Interior awarded Brocktek a $65,000 contract for 3DR Solo drones. (FBO)  

For updates, news, and commentary, follow us on Twitter. The Weekly Drone Roundup is a newsletter from the Center for the Study of the Drone. It covers news, commentary, analysis and technology from the drone world. You can subscribe to the Roundup here.

Using machine learning to improve patient care

Credit: Shutterstock / MIT

Doctors are often deluged by signals from charts, test results, and other metrics to keep track of. It can be difficult to integrate and monitor all of these data for multiple patients while making real-time treatment decisions, especially when data is documented inconsistently across hospitals.

In a new pair of papers, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) explore ways for computers to help doctors make better medical decisions.

One team created a machine-learning approach called “ICU Intervene” that takes large amounts of intensive-care-unit (ICU) data, from vitals and labs to notes and demographics, to determine what kinds of treatments are needed for different symptoms. The system uses “deep learning” to make real-time predictions, learning from past ICU cases to make suggestions for critical care, while also explaining the reasoning behind these decisions.

“The system could potentially be an aid for doctors in the ICU, which is a high-stress, high-demand environment,” says PhD student Harini Suresh, lead author on the paper about ICU Intervene. “The goal is to leverage data from medical records to improve health care and predict actionable interventions.”

Another team developed an approach called “EHR Model Transfer” that can facilitate the application of predictive models on an electronic health record (EHR) system, despite being trained on data from a different EHR system. Specifically, using this approach the team showed that predictive models for mortality and prolonged length of stay can be trained on one EHR system and used to make predictions in another.

ICU Intervene was co-developed by Suresh, undergraduate student Nathan Hunt, postdoc Alistair Johnson, researcher Leo Anthony Celi, MIT Professor Peter Szolovits, and PhD student Marzyeh Ghassemi. It was presented this month at the Machine Learning for Healthcare Conference in Boston.

EHR Model Transfer was co-developed by lead authors Jen Gong and Tristan Naumann, both PhD students at CSAIL, as well as Szolovits and John Guttag, who is the Dugald C. Jackson Professor in Electrical Engineering. It was presented at the ACM’s Special Interest Group on Knowledge Discovery and Data Mining in Halifax, Canada.

Both models were trained using data from the critical care database MIMIC, which includes de-identified data from roughly 40,000 critical care patients and was developed by the MIT Lab for Computational Physiology.

ICU Intervene

Integrated ICU data is vital to automating the process of predicting patients’ health outcomes.

“Much of the previous work in clinical decision-making has focused on outcomes such as mortality (likelihood of death), while this work predicts actionable treatments,” Suresh says. “In addition, the system is able to use a single model to predict many outcomes.”

ICU Intervene focuses on hourly prediction of five different interventions that cover a wide variety of critical care needs, such as breathing assistance, improving cardiovascular function, lowering blood pressure, and fluid therapy.

At each hour, the system extracts values from the data that represent vital signs, as well as clinical notes and other data points. All of the data are represented as values that indicate how far a patient deviates from the average, which the model then uses to evaluate further treatment.
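The system’s real featurization is richer (it also covers notes and demographics), but a toy version of the “distance from average” idea looks like this, with invented population statistics:

```python
# Toy hourly featurization: each vital becomes a z-score against population
# statistics. The vitals, means, and stds below are invented for illustration.
import numpy as np

POPULATION = {
    "heart_rate": (80.0, 15.0),   # (mean, std)
    "systolic_bp": (120.0, 20.0),
    "resp_rate": (16.0, 4.0),
}

def hourly_features(measurements):
    """measurements: {vital_name: value} for one patient-hour."""
    return np.array([(measurements[name] - mean) / std
                     for name, (mean, std) in POPULATION.items()])

print(hourly_features({"heart_rate": 110, "systolic_bp": 85, "resp_rate": 24}))
# [ 2.   -1.75  2.  ]: an abnormal hour that a model might associate with an
# upcoming intervention such as vasopressors or ventilation.
```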

Importantly, ICU Intervene can make predictions far into the future. For example, the model can predict whether a patient will need a ventilator six hours later rather than just 30 minutes or an hour later. The team also focused on providing reasoning for the model’s predictions, giving physicians more insight.

“Deep neural-network-based predictive models in medicine are often criticized for their black-box nature,” says Nigam Shah, an associate professor of medicine at Stanford University who was not involved in the paper. “However, these authors predict the start and end of medical interventions with high accuracy, and are able to demonstrate interpretability for the predictions they make.”

The team found that the system outperformed previous work in predicting interventions, and was especially good at predicting the need for vasopressors, a medication that tightens blood vessels and raises blood pressure.

In the future, the researchers will be trying to improve ICU Intervene to be able to give more individualized care and provide more advanced reasoning for decisions, such as why one patient might be able to taper off steroids, or why another might need a procedure like an endoscopy.

EHR Model Transfer

Another important consideration for leveraging ICU data is how it’s stored and what happens when that storage method gets changed. Existing machine-learning models need data to be encoded in a consistent way, so the fact that hospitals often change their EHR systems can create major problems for data analysis and prediction.

That’s where EHR Model Transfer comes in. The approach works across different versions of EHR platforms, using natural language processing to identify clinical concepts that are encoded differently across systems and then mapping them to a common set of clinical concepts (such as “blood pressure” and “heart rate”).
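A deliberately simplified illustration of that mapping step is sketched below, using an invented, hand-written code table where the actual approach relies on NLP and medical ontologies:

```python
# Toy concept normalization across EHR systems. All codes are invented.
COMMON_CONCEPTS = {
    "hospital_a": {"HR": "heart_rate", "NIBP_SYS": "systolic_blood_pressure"},
    "hospital_b": {"PULSE": "heart_rate", "SBP": "systolic_blood_pressure"},
}

def to_common(site, record):
    """Re-key one site's record onto the shared concept vocabulary."""
    mapping = COMMON_CONCEPTS[site]
    return {mapping[code]: value for code, value in record.items() if code in mapping}

# A model trained on hospital_a's data can now score hospital_b's patients:
print(to_common("hospital_b", {"PULSE": 92, "SBP": 135}))
# {'heart_rate': 92, 'systolic_blood_pressure': 135}
```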

For example, a patient in one EHR platform could be switching hospitals and would need their data transferred to a different type of platform. EHR Model Transfer aims to ensure that the model could still predict aspects of that patient’s ICU visit, such as their likelihood of a prolonged stay or even of dying in the unit.

“Machine-learning models in health care often suffer from low external validity, and poor portability across sites,” says Shah. “The authors devise a nifty strategy for using prior knowledge in medical ontologies to derive a shared representation across two sites that allows models trained at one site to perform well at another site. I am excited to see such creative use of codified medical knowledge in improving portability of predictive models.”

With EHR Model Transfer, the team tested their model’s ability to predict two outcomes: mortality and the need for a prolonged stay. They trained it on one EHR platform and then tested its predictions on a different platform. EHR Model Transfer was found to outperform baseline approaches and demonstrated better transfer of predictive models across EHR versions compared to using EHR-specific events alone.

In the future, the EHR Model Transfer team plans to evaluate the system on data and EHR systems from other hospitals and care settings.

Both papers were supported, in part, by the Intel Science and Technology Center for Big Data and the National Library of Medicine. The paper detailing EHR Model Transfer was additionally supported by the National Science Foundation and Quanta Computer, Inc.

Robots Podcast #241: Tensegrity Control, with Kostas Bekris

In this episode, Jack Rasiel speaks with Kostas Bekris, who introduces us to tensegrity robotics: a striking robotic design which straddles the boundary between hard and soft robotics. A structure uses tensegrity if it is made of a number of isolated rigid elements which are held in compression by a network of elements that are in tension. Bekris, an Associate Professor of Computer Science, draws from a diverse set of problems to find innovative new ways to control tensegrity robots.

Kostas Bekris

Kostas Bekris, Associate Professor of Computer Science at Rutgers University

Kostas Bekris is an Associate Professor of Computer Science at Rutgers, the State University of New Jersey. He works in the area of algorithmic robotics, especially on problems related to robot motion planning and coordination. He received his PhD from Rice University in 2008 under the guidance of Lydia Kavraki. He was an Assistant Professor at the University of Nevada, Reno until 2012. His research has been supported by NSF, NASA, the DoD and DHS, including a NASA Early Career Faculty award.

 

Links:

New Horizon 2020 robotics projects, 2016: ILIAD

In 2016, the European Union co-funded 17 new robotics projects from the Horizon 2020 Framework Programme for research and innovation. 16 of these resulted from the robotics work programme, and 1 project resulted from the Societal Challenges part of Horizon 2020. The robotics work programme implements the robotics strategy developed by SPARC, the Public-Private Partnership for Robotics in Europe (see the Strategic Research Agenda). 

EuRobotics regularly publishes video interviews with projects, so that you can find out more about their activities. This week features ILIAD: Intra-Logistics with Integrated Automatic Deployment: Safe and Scalable Fleets in Shared Spaces.

Objectives

ILIAD is driven by the industry needs for highly flexible robot fleets operating in spaces shared with humans. The main objectives are care-free, fast, and scalable deployment; long-term operation while learning from observed activities; on-line, self-optimising fleet management; human-aware fleets that can learn human behaviour models; compliant unpacking and palletising of goods; and a systematic study of human safety in shared environments, setting the stage for future safety certification.

Expected Impact

ILIAD’s focus is on the rapidly expanding intralogistics domain, where there is a strong market pull for flexible automated solutions, especially ones that can blend with current operations. The innovations developed in ILIAD target key hindrances identified in the logistics domain, and are essential for independent and reliable operation of collaborative AGV fleets. The expected impact extends to most multiple-actor systems where robots and humans operate together.

Partners

ÖREBRO UNIVERSITET
UNIVERSITY OF LINCOLN
UNIVERSITÀ DI PISA
LEIBNIZ UNIVERSITÄT HANNOVER
ROBERT BOSCH GMBH
KOLLMORGEN AUTOMATION AB
ACT OPERATIONS RESEARCH
ORKLA FOODS
LOGISTIC ENGINEERING SERVICES LTD

Coordinator:

Achim J. Lilienthal

Project website:

http://www.iliad-project.eu/

Watch all EU-projects videos

If you enjoyed reading this article, you may also want to read:

See all the latest robotics news on Robohub, or sign up for our weekly newsletter.

Digital symbiosis lets robot co-workers predict human behaviour

Robot co-workers could help out with repetitive jobs and heavy lifting by reacting to human actions. Image credit – Italian Institute of Technology

by Anthony King

Stephen Hawking and Elon Musk fear that the robotic revolution may already be underway, but automation isn’t going to take over just yet – first machines will work alongside us.

Robots across the world help out in factories by taking on heavy lifting or repetitive jobs, but the walking, talking kind may soon collaborate with people, thanks to European robotics researchers building prototypes that anticipate human actions.

‘Ideally robots should be able to sense interactional forces, like carrying a table with someone,’ said Francesco Nori, who coordinates the EU-funded An.Dy project which aims to advance human-robot collaboration. ‘(Robots) need to know what the human is about to do and what they can do to help.’

In any coordinated activity, whether dancing or lifting a table together, timing is crucial and that means a robot needs to anticipate before a person acts.

‘Today, robots just react – half a second of anticipation might be enough,’ said Nori, who works at the Italian Institute of Technology, which is renowned for its humanoid robot iCub. The robot will be educated in human behaviour using data collected during the An.Dy project.

The data will flow from a special high-tech suit that lies at the heart of the project – the AndySuit. This tight suit is studded with sensors to track movement, acceleration of limbs and muscle power as a person performs actions alone or in combination with a humanoid robot.

A special high-tech suit known as the AndySuit allows a person to perform actions alongside a robot. Image credit – Italian Institute of Technology

This sends data to a robot similar to iCub so that it can recognise what the human is doing and predict the next action just ahead of time. The collaborative robot – also known as a cobot – would then be programmed to support the worker.

‘The robot would recognise a good posture and a bad posture and would work so that it gives you an object in the right way to avoid injury,’ explained Nori, adding that the cobot would adapt its own actions to maximise the comfort of the human.

The robot’s capabilities will come from its library of pre-programmed models of human movement, but also from internal sensors and a mobile phone app. Special sensors that communicate with the iCub are also being developed for the AndySuit, but at the moment the suit is more appropriate for the robotics lab than the factory floor.
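One hedged way to picture the anticipation loop, far simpler than the project’s actual models of human dynamics, is an action recogniser paired with transition statistics learned from recorded suit sessions:

```python
# Toy anticipation: learn which action tends to follow which, then predict
# the next action from the current one. Action labels are invented.
from collections import Counter, defaultdict

class ActionAnticipator:
    def __init__(self):
        self.transitions = defaultdict(Counter)  # action -> next-action counts

    def observe(self, action, next_action):
        """Accumulate transition statistics from recorded AndySuit sessions."""
        self.transitions[action][next_action] += 1

    def anticipate(self, current_action):
        """Return the most likely next action, or None if the action is unseen."""
        counts = self.transitions[current_action]
        return counts.most_common(1)[0][0] if counts else None

model = ActionAnticipator()
model.observe("reach_for_table", "lift_table")
model.observe("reach_for_table", "lift_table")
model.observe("reach_for_table", "adjust_grip")
print(model.anticipate("reach_for_table"))  # "lift_table": robot pre-positions
```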

To get the robot and AndySuit closer to commercialisation it will be tested in three different scenarios. First, in a workspace where a person works beside a cobot. Second, when a person wears an exoskeleton, which could be useful for workers who must lift heavy loads and can be assisted by a robust metal skeleton around them.

A third scenario will be where a humanoid robot offers assistance and could take turns performing tasks. In this situation, the robot would look like the archetypal sci-fi robot, like Sonny from the film I, Robot.

Silicon sidekick

A different project will see a human-like prototype robot reach out a helping hand to support technicians, under an EU-funded project called SecondHands led by Ocado Technology in the UK.

‘Ask it to pass the screwdriver, and it will respond asking whether you meant the one on the table or in the toolbox.’ Duncan Russell, Ocado Technology

Ocado runs giant automated warehouses that fulfil grocery orders. Its warehouse in Hatfield, north of London, is the size of several football fields and must be temporarily shut down for regular maintenance.

Duncan Russell, research coordinator at Ocado Technology, explained: ‘Parts need to be cleaned and parts need replacing. The robot system is being designed to help the technicians with those tasks.’

While the technician stands on a ladder, a robot below would watch what they are doing and provide the next tool or piece of equipment when asked.

‘The robot will understand instructions in regular language – it will be cleverer than you might expect,’ said Russell. ‘Ask it to pass the screwdriver, and it will respond asking whether you meant the one on the table or in the toolbox.’

The robot will feature capabilities straight from the Inspector Gadget cartoon series. An extendable torso will allow it to move upwards, and telescopic limbs will give it a reach of more than three metres.

‘The arm span is 3.1 metres and the torso is around 1.8 metres, which gives it a dynamic reach. This will allow it to offer assistance to technicians up on a ladder,’ said Russell.

This futuristic scenario is being brought to reality by research partners around Europe. Robotics experts at Karlsruhe Institute of Technology in Germany have built a wheeled prototype robot. The plan is for a bipedal robot to be tested in the Ocado robots lab in Hatfield, and for it to be transferred to the warehouse floor for a stint with a real technician.

Karlsruhe is also involved in teaching the robot natural language and, together with the Swiss Federal Institute of Technology in Lausanne, it is developing a grasping hand so the helper robot can wield tools with care. The vision system of this silicon sidekick is being developed by researchers at University College London, UK.

The handy robotic helper could also do cross-checking for the maintenance person, perhaps offering a reminder if a particular step is missed, for example.

‘The technician will get more done and faster, so that the shutdown times for maintenance can be shortened,’ said Russell.

More info:
An.Dy
SecondHands

Robotics and AI celebrated in this year’s MIT Technology Review 35 Innovators Under 35 list

Credit: MIT Tech Review

13 researchers working in robotics and AI made the MIT Technology Review “35 Innovators Under 35” list this year.

Robotics

Anca Dragan
UC Berkeley
Ensuring that robots and humans work and play well together.

Lorenz Meier
ETHZ
An open-source autopilot for drones.

Austin Russell
Luminar
Better sensors for safer automated driving.

Angela Schoellig
University of Toronto
Her algorithms are helping self-driving and self-flying vehicles get around more safely.

Jianxiong Xiao
AutoX
His company AutoX aims to make self-driving cars more accessible.

AI

Greg Brockman
OpenAI
Trying to make sure that AI benefits humanity.

Joshua Browder
DoNotPay
Using chatbots to help people avoid legal fees.

Ian Goodfellow
Google Brain
Invented a way for neural networks to get better by working together.

Volodymyr Mnih
DeepMind
The first system to play Atari games as well as a human can.

Olga Russakovsky
Princeton University
Employed crowdsourcing to vastly improve computer-vision systems.

Gang Wang
Alibaba
At the forefront of turning AI into consumer-ready products.

Gregory Wayne
DeepMind
Using an understanding of the brain to create smarter machines.

Jenna Wiens
University of Michigan
Her computational models identify patients who are most at risk of a deadly infection.
