
How to control robots with brainwaves and hand gestures

A system developed at MIT allows a human supervisor to correct a robot’s mistakes using gestures and brainwaves.
Photo: Joseph DelPreto/MIT CSAIL
By Adam Conner-Simons

Getting robots to do things isn’t easy: Usually, scientists have to either explicitly program them or get them to understand how humans communicate via language.

But what if we could control robots more intuitively, using just hand gestures and brainwaves?

A new system spearheaded by researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) aims to do exactly that, allowing users to instantly correct robot mistakes with nothing more than brain signals and the flick of a finger.

Building off the team’s past work focused on simple binary-choice activities, the new work expands the scope to multiple-choice tasks, opening up new possibilities for how human workers could manage teams of robots.

By monitoring brain activity, the system can detect in real time whether a person notices an error as a robot does a task. Using an interface that measures muscle activity, the person can then make hand gestures to scroll through and select the correct option for the robot to execute.

The team demonstrated the system on a task in which a robot moves a power drill to one of three possible targets on the body of a mock plane. Importantly, they showed that the system works on people it’s never seen before, meaning that organizations could deploy it in real-world settings without needing to train it on users.

“This work combining EEG and EMG feedback enables natural human-robot interactions for a broader set of applications than we’ve been able to do before using only EEG feedback,” says CSAIL Director Daniela Rus, who supervised the work. “By including muscle feedback, we can use gestures to command the robot spatially, with much more nuance and specificity.”

PhD candidate Joseph DelPreto was lead author on a paper about the project alongside Rus, former CSAIL postdoc Andres F. Salazar-Gomez, former CSAIL research scientist Stephanie Gil, research scholar Ramin M. Hasani, and Boston University Professor Frank H. Guenther. The paper will be presented at the Robotics: Science and Systems (RSS) conference taking place in Pittsburgh next week.

In most previous work, systems could generally only recognize brain signals when people trained themselves to “think” in very specific but arbitrary ways and when the system was trained on such signals. For instance, a human operator might have to look at different light displays that correspond to different robot tasks during a training session.

Not surprisingly, such approaches are difficult for people to handle reliably, especially if they work in fields like construction or navigation that already require intense concentration.

Meanwhile, Rus’ team harnessed the power of brain signals called “error-related potentials” (ErrPs), which researchers have found to naturally occur when people notice mistakes. If there’s an ErrP, the system stops so the user can correct it; if not, it carries on.

“What’s great about this approach is that there’s no need to train users to think in a prescribed way,” says DelPreto. “The machine adapts to you, and not the other way around.”

For the project, the team used “Baxter,” a humanoid robot from Rethink Robotics. With human supervision, the robot went from choosing the correct target 70 percent of the time to more than 97 percent of the time.

To create the system, the team used electroencephalography (EEG) to monitor brain activity and electromyography (EMG) to monitor muscle activity, placing a series of electrodes on the users’ scalps and forearms.

Both metrics have some individual shortcomings: EEG signals are not always reliably detectable, while EMG signals can sometimes be difficult to map to motions that are any more specific than “move left or right.” Merging the two, however, allows for more robust bio-sensing and makes it possible for the system to work on new users without training.
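
To make the division of labour between the two signal streams concrete, here is a minimal Python sketch of such a supervision loop, assuming streaming EEG and EMG windows. The `detect_errp` and `classify_gesture` functions are hypothetical stand-ins for trained classifiers, and the target names are made up; this is an illustration of the pattern described above, not the CSAIL team’s actual code.

```python
# Hypothetical sketch of a hybrid EEG/EMG supervision loop.
# detect_errp() and classify_gesture() stand in for trained classifiers
# over streaming EEG and EMG windows; they are illustrative stubs only.

import random
from typing import List, Optional


def detect_errp(eeg_window: List[float]) -> bool:
    """Placeholder: return True if an error-related potential is detected
    in this window of EEG samples."""
    return random.random() < 0.1  # stub decision, not a real classifier


def classify_gesture(emg_window: List[float]) -> Optional[str]:
    """Placeholder: map a burst of forearm EMG activity to a gesture."""
    return random.choice([None, "left", "right", "select"])


def supervise(targets: List[str], robot_choice: str, eeg_stream, emg_stream) -> str:
    """If the observer's EEG flags an error, let EMG gestures scroll through
    the targets and select the correct one; otherwise keep the robot's choice."""
    if not detect_errp(next(eeg_stream)):
        return robot_choice                      # no ErrP: carry on
    index = targets.index(robot_choice)
    for emg_window in emg_stream:                # ErrP detected: pause and correct
        gesture = classify_gesture(emg_window)
        if gesture == "left":
            index = (index - 1) % len(targets)
        elif gesture == "right":
            index = (index + 1) % len(targets)
        elif gesture == "select":
            break
    return targets[index]


if __name__ == "__main__":
    eeg = iter(lambda: [0.0] * 64, None)   # endless dummy EEG windows
    emg = iter(lambda: [0.0] * 8, None)    # endless dummy EMG windows
    print(supervise(["hole_A", "hole_B", "hole_C"], "hole_B", eeg, emg))
```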

“By looking at both muscle and brain signals, we can start to pick up on a person’s natural gestures along with their snap decisions about whether something is going wrong,” says DelPreto. “This helps make communicating with a robot more like communicating with another person.”

The team says they could imagine the system one day being useful for the elderly, or for workers with language disorders or limited mobility.

“We’d like to move away from a world where people have to adapt to the constraints of machines,” says Rus. “Approaches like this show that it’s very much possible to develop robotic systems that are a more natural and intuitive extension of us.”


What is artificial intelligence? (Or, can machines think?)

Here are the slides from my York Festival of Ideas keynote yesterday, which introduced the festival focus day Artificial Intelligence: Promises and Perils.

I start the keynote with Alan Turing’s famous question: Can a Machine Think? and explain that thinking is not just the conscious reflection of Rodin’s Thinker but also the largely unconscious thinking required to make a pot of tea. I note that at the dawn of AI 60 years ago we believed the former kind of thinking would be really difficult to emulate artificially and the latter easy. In fact it has turned out to be the other way round: we’ve had computers that can expertly play chess for over 20 years, but we can’t yet build a robot that could go into your kitchen and make you a cup of tea (see also the Wozniak coffee test).

In slides 5 and 6 I suggest that we all assume a cat is smarter than a crocodile, which is smarter than a cockroach, on a linear scale of intelligence from not very intelligent to human intelligence. I ask where a robot vacuum cleaner would sit on this scale and propose that such a robot is about as smart as an E. coli (a single-celled organism). I then illustrate the difficulty of placing the Actroid robot on this scale because, although it may look convincingly human (from a distance), in reality the robot is not very much smarter than a washing machine (and I hint that this is an ethical problem).

In slide 7 I show how apparently intelligent behaviour doesn’t require a brain, with the Solarbot. This robot is an example of a Braitenberg machine. It has two solar panels (which look a bit like wings) acting as both sensors and power sources; the left-hand panel is connected to the right-hand wheel and vice versa. These direct connections mean that Solarbot can move towards the light and even navigate its way through obstacles, showing that intelligent behaviour is an emergent property of the interactions between body and environment.
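
For readers unfamiliar with Braitenberg machines, the cross-wired control law can be sketched in a few lines of Python. This is a generic Braitenberg-style controller, not Solarbot’s actual wiring; the sensor names, gain, and example values are illustrative only.

```python
def braitenberg_step(left_light: float, right_light: float, gain: float = 1.0):
    """Return (left_wheel_speed, right_wheel_speed) for one control step.
    Each sensor excites the opposite wheel, so the stronger the light on one
    side, the faster the opposite wheel turns and the vehicle steers toward it."""
    left_wheel = gain * right_light    # right sensor drives the left wheel
    right_wheel = gain * left_light    # left sensor drives the right wheel
    return left_wheel, right_wheel


# Light is brighter on the left, so the right wheel spins faster and the
# vehicle turns left, toward the light.
print(braitenberg_step(left_light=0.8, right_light=0.2))  # -> (0.2, 0.8)
```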

In slide 8 I ask the question: what is the most advanced AI in the world today? (A question I am often asked.) Is it, for example, David Hanson’s robot Sophia (which some press reports have claimed is the world’s most advanced)? I argue it is not, since it is a chatbot AI – with a limited conversational repertoire – attached to a physical body (imagine Alexa with a humanoid head). Is it the DeepMind AI AlphaGo, which famously beat the world’s best Go player in 2016? Although very impressive, I again argue no, since AlphaGo cannot do anything other than play Go. Instead I suggest that everyday Google might well be the world’s most advanced AI (on this I agree with my friend Joanna Bryson). Google is in effect a librarian able to find a book from an immense library for you – on the basis of your ill-formed query – more or less instantly! (And this librarian is polylingual too.)

In slide 9 I make the point that intelligence is not one thing that animals, robots and AIs have more or less of (in other words the linear scale shown in slides 5 and 6 is wrong). Then in slides 10-13 I propose four distinct categories of intelligence: morphological, swarm, individual and social intelligence. I suggest in slides 14-16 that if we express these as four axes of a graph then we can (very approximately) compare the intelligence of different organisms, including humans. In slide 17 I show some robots and argue that this graph shows why robots are so unintelligent: it is because robots generally have only two of the four kinds of intelligence, whereas animals typically have three or sometimes all four. A detailed account of these ideas can be found in my paper How intelligent is your intelligent robot?

In the next segment, slides 18-20, I ask: how do we make Artificial General Intelligence (AGI)? I suggest that the key difference between current narrow AI and AGI is the ability – which comes very naturally to humans – to generalise knowledge learned in one context to a completely different context. This, I think, is the basis of human creativity. Using Data from Star Trek: The Next Generation as a science-fiction example of the human-equivalent AGI we might be aiming for, I explain that there are three approaches to getting there: by design, using artificial evolution, or by reverse engineering animals. I offer the opinion that the gap between where we are now and Data-like AGI is about the same as the gap between current spacecraft engine technology and warp drive technology. In other words, not any time soon.

In the fourth segment of the talk (slides 21-24) I give a very brief account of evolutionary robotics – a method for breeding robots in much the same way farmers have artificially selected new varieties of plants and animals for thousands of years. I illustrate this with the wonderful Golem project which, for the first time, evolved simple creatures and then 3D printed the most successful ones. I then introduce our new four-year EPSRC-funded project Autonomous Robot Evolution: from cradle to grave. In a radical new approach we aim to co-evolve robot bodies and brains in real time and real space. Using 3D printing techniques, new robot designs will literally be printed, before being trained in a nursery and then fitness tested in a target environment. With this approach we hope to be able to evolve robots for extreme environments; however, because the energy costs are so high, I do not think evolution is a route to truly thinking machines.
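
As a rough illustration of the “breeding” loop at the heart of evolutionary robotics, here is a minimal, generic sketch in Python. The genome encoding, mutation scheme, and toy fitness function are placeholders of my own choosing, not the Golem project’s or the Autonomous Robot Evolution project’s actual pipeline; in practice the evaluation step would be a simulation or a physical test of a printed robot.

```python
# A minimal, generic evolutionary loop: mutate candidate robot "genomes",
# evaluate their fitness, and keep the best performers each generation.

import random


def random_genome(length: int = 10) -> list:
    """A genome as a list of real-valued design/control parameters."""
    return [random.uniform(-1, 1) for _ in range(length)]


def mutate(genome: list, rate: float = 0.1) -> list:
    """Add small Gaussian noise to every parameter."""
    return [g + random.gauss(0, rate) for g in genome]


def fitness(genome: list) -> float:
    """Toy objective (parameters near zero score best); a real system would
    evaluate the robot in simulation or in a target environment."""
    return -sum(g * g for g in genome)


def evolve(pop_size: int = 20, generations: int = 50) -> list:
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]            # keep the fittest half
        children = [mutate(random.choice(parents))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)


print(fitness(evolve()))  # fitness of the best evolved genome
```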

In the final segment (slides 25-35) I return to the approach of trying to design rather than evolve thinking machines. I introduce the idea of embedding a simulation of a robot inside that robot, so that it has the ability to internally model itself. The first example I give is the amazing anthropomimetic robot invented by my old friend Owen Holland, called ECCEROBOT. Eccerobot is able to learn how to control its own very complicated and hard-to-control body by trying out possible movement sequences in its internal model (Owen calls this a ‘functional imagination’). I then outline our own work using the same principle – a simulation-based internal model – to demonstrate simple ethical behaviours, first with e-puck robots, then with NAO robots. These experiments are described in detail here and here. I suggest that these robots – with their ability to model and predict the consequences of their own and others’ actions, in other words to anticipate the future – may represent the first small steps toward thinking machines.
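
The underlying pattern of such an internal model can be sketched briefly: simulate each candidate action, score its predicted consequences, and rule out the unsafe ones before acting. The Python sketch below is a hypothetical illustration of that pattern, with made-up action names and scoring functions; it is not the consequence engine used in the e-puck and NAO experiments.

```python
# Sketch of action selection with a simulation-based internal model:
# simulate each candidate action, predict its consequences, discard actions
# whose predicted outcomes are unsafe, and pick the best of the rest.

from typing import Callable, Dict, List


def select_action(actions: List[str],
                  simulate: Callable[[str], Dict],
                  safety_score: Callable[[Dict], float],
                  task_score: Callable[[Dict], float]) -> str:
    """Pick the action with the best predicted task outcome among those
    whose simulated consequences are judged safe."""
    safe = [a for a in actions if safety_score(simulate(a)) >= 0.0]
    candidates = safe or actions            # fall back if nothing is judged safe
    return max(candidates, key=lambda a: task_score(simulate(a)))


# Toy usage: "proceed" is productive but predicted to cause a collision,
# "wait" is safe but unproductive, so the robot chooses to wait.
outcomes = {"proceed": {"collision": True,  "progress": 1.0},
            "wait":    {"collision": False, "progress": 0.0}}
chosen = select_action(
    actions=list(outcomes),
    simulate=lambda a: outcomes[a],
    safety_score=lambda o: -1.0 if o["collision"] else 1.0,
    task_score=lambda o: o["progress"],
)
print(chosen)  # -> "wait"
```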


Related blog posts:
60 years of asking: can robots think?

How intelligent are intelligent robots?

Robot bodies and how to evolve them


Mouser Electronics – TE Connectivity HDC Dynamic Module

TE Connectivity’s HDC Dynamic Module combines the Dynamic series of flexible signal and power solutions with the HDC Heavy Duty Connector series to form a family of connectors for harsh environments. The module offers the top features of both series: it uses the contact concept of the Dynamic series, with its proven performance in industrial applications and its cost effectiveness compared to legacy cutting contacts, while the HDC connectors make it a reliable solution for harsh environments. TE’s HDC Dynamic Module supports ratings from 2 A/32 V to 40 A/300 V and from 3 to 48 positions.