
Simulations are the key to intelligent robots

I read an article entitled Games Hold the Key to Teaching Artificial Intelligent Systems, by Danny Vena, in which the author states that computer games like Minecraft, Civilization, and Grand Theft Auto have been used to train intelligent systems to perform better at visual learning, language understanding, and collaboration with humans. The author concludes that games are going to be a key element in the field of artificial intelligence in the near future. And he is almost right.

In my opinion, the article only touches the surface of artificial intelligence by talking about games. Games have been a good starting point for the generation of intelligent systems that outperform humans, but going deeper into the realm of robots that are useful in human environments will require something more complex than games. And I’m talking about simulations.

Using games to bootstrap intelligence

The idea of beating humans at games has been part of artificial intelligence since its birth. Initially, researchers created programs to beat humans at Tic-Tac-Toe and chess (for example, IBM's Deep Blue). However, the intelligence behind those programs was written from scratch by human minds: people wrote the code that decided which move should come next. That manual approach to generating intelligence eventually hit a limit: we realized that intelligence is so complex that it may be too difficult to write a program that emulates it by hand.

Then a new idea was born: what if we create a system that learns by itself? In that case, engineers only have to program the learning structures and set up the proper environment to allow intelligence to bootstrap itself.

AlphaGo beats Lee Sedol (photo credit: intheshortestrun)

The results of that idea are programs that learn to play games better than anyone in the world, even though nobody explains to the program how to play in the first place. For example, Google's DeepMind created the AlphaGo Zero program using that approach; it was able to beat the best Go players in the world. The company used the same approach to create programs that learned to play Atari games starting from zero knowledge. Recently, OpenAI used this approach for a bot that beat professional players of the Dota 2 game. By the way, if you want to reproduce the Atari results, OpenAI released OpenAI Gym, which contains all the code needed to start training your system on Atari games and to compare its performance against other people's.
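The core of that "learning from zero knowledge" idea can be sketched in a few lines of plain Python. The following is not AlphaGo or the Gym Atari code, just a minimal tabular Q-learning sketch on a toy corridor "game" that mimics the Gym-style reset()/step() interface; the environment and all its numbers are invented for illustration:

```python
import random

# A toy stand-in for a game environment, mimicking the Gym-style
# reset()/step() interface: a corridor of 5 cells, reward at the right end.
class Corridor:
    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):          # action: 0 = left, 1 = right
        self.pos = max(0, min(4, self.pos + (1 if action == 1 else -1)))
        done = self.pos == 4
        return self.pos, (1.0 if done else 0.0), done

# Tabular Q-learning: the agent starts with zero knowledge of the game
# and improves purely from the rewards it stumbles upon while exploring.
def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.3):
    random.seed(0)
    env = Corridor()
    q = [[0.0, 0.0] for _ in range(5)]        # Q[state][action]
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = random.randrange(2) if random.random() < epsilon \
                else max((0, 1), key=lambda x: q[s][x])
            s2, r, done = env.step(a)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# After training, the greedy policy in states 0..3 is "move right".
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(5)]
print(policy)
```

Nobody told the agent that "right" is good; it discovered that by itself, which is exactly the property that made the games approach so attractive.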

What I took from those results is that the idea of making an intelligent system generate intelligence by itself is a good approach, and that the algorithms used for teaching can be used for making robots learn about their space (I’m not so optimistic about the way to encode the knowledge and to set the learning environment and stages, but that is another discussion).

From games to simulations

OpenAI wanted to go further. Instead of using games to generate programs that can play a game, they applied the same idea to make a robot do something useful: learning to manipulate a cube in its hand. In this case, instead of a game, they used a simulation of the robot. The simulation emulated the robot and its environment as if they were real. Then they let the algorithm control the simulated robot and learn the task using domain randomization. After many trials, the simulated robot was able to manipulate the block in the expected way. But that was not all! At the end of the article, the authors successfully transferred the learned control program to a real robot, which performed in a way similar to the simulated one. Except it was real.

Simulated Hand OpenAI Manipulation Experiment (image credit: OpenAI)

Real Hand OpenAI Manipulation Experiment (photo credit: OpenAI)

A similar approach was applied by OpenAI to a Fetch robot trained to grasp a Spam box from a table filled with different objects. The robot was trained in simulation and then successfully transferred to the real robot.

OpenAI teaches a Fetch robot to grasp from a table using simulations (photo credit: OpenAI)

We are getting close to the holy grail in robotics, a complete self-learning system!

Training robots

However, in their experiments, engineers from OpenAI discovered that training robots is far more complex than training algorithms for games. While in games the intelligent system has a very limited set of actions and perceptions available, robots face a huge, continuous spectrum in both domains, actions and perceptions. We can say that the options are infinite.

That explosion in the number of options diminishes the usefulness of the algorithms used for reinforcement learning (RL). Usually, the problem is handled with artificial tricks, like discarding some of the information completely or artificially discretizing the data values, reducing the options to only a few.
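One of those tricks, discretization, is easy to show concretely. This is a generic sketch (the ranges and bin counts below are arbitrary choices for illustration, not from any particular robot):

```python
# Collapse a continuous sensor reading into a handful of discrete bins so
# that a tabular RL algorithm can handle it. Information is lost on
# purpose: that is exactly the "artificial trick" described above.
def discretize(value, low, high, n_bins):
    """Map a continuous value in [low, high] to a bin index 0..n_bins-1."""
    value = max(low, min(high, value))          # clamp out-of-range readings
    fraction = (value - low) / (high - low)
    return min(n_bins - 1, int(fraction * n_bins))

# Example: a joint angle between -3.14 and 3.14 rad reduced to 8 options.
print(discretize(0.0, -3.14, 3.14, 8))   # middle of the range -> bin 4
print(discretize(-3.14, -3.14, 3.14, 8)) # lower bound -> bin 0
```

The price is obvious: two quite different joint angles can land in the same bin, which is precisely why this trick stops scaling for complex tasks.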

OpenAI engineers found that even if the robots were trained in simulations, their approach could not scale to more complex tasks.

Massive data vs. complex learning algorithms

As Andrew Ng indicated, and as an engineer from OpenAI personally confirmed to me based on his results, massive data with simple learning algorithms beats complicated algorithms with small amounts of data. This means that it is not a good idea to focus on building ever more complex learning algorithms. Instead, the best approach to intelligent robotics would be to use simple learning algorithms trained on massive amounts of data (which makes sense if we observe our own brain: a massive network of neurons trained over many years).

Google has always known that. Hence, in order to obtain massive amounts of data to train its robots, Google created a real-life system with real robots training all day long in a large space. Even if it is a clever idea, we can all see that this is not practical for every kind of robot and application (broken robots, training limited to real time, a limited number of robots, a limited number of environments, and so on).

Google robots training for grasping

That leads us to the same solution again: to use simulations. By using simulations, we can put any robot in any situation and train them there. Also, we can have virtually an infinite number of them training in parallel, and generate massive amounts of data in record time.

Even if that approach looks obvious right now, it was not three years ago, when we created our company, The Construct, around robot simulations in the cloud. I remember exhibiting at Innorobo 2015 and finding, after interviewing all the other exhibitors, that only two of them were using simulations in their work. Furthermore, roboticists considered simulations something nasty to be avoided at all costs, since nothing compares with the real robot (check here for a post I wrote about it at the time).

Thankfully, the situation has changed since then. Now, using simulations to train real robots is starting to become the standard way.

Transferring to real robots

We all know that it is one thing to get a solution working in simulation and quite another for that solution to work on the real robot. Having the robot do something in simulation doesn't imply that it will work the same way on the real robot. Why is that?

Well, there is something called the reality gap. We can define the reality gap as the difference between the simulation of a situation and the real-life situation. Since it is impossible to simulate something perfectly, there will always be differences between simulation and reality. If the difference is big enough, the results obtained in the simulator may not be relevant at all. That is, you have a big reality gap, and what works in the simulation does not apply to the real world.

That reality-gap problem is one of the main arguments used to dismiss simulators for robotics. In my opinion, the path to follow is not to discard simulators and find something else, but to find solutions for crossing the reality gap. As for solutions, I believe we have two options:

1. Create more accurate simulators. That is on its way; we can see efforts in this direction. Some simulators concentrate on better physics (MuJoCo); others on visual realism (Unreal- or Unity-based simulators, like Carla or AirSim). We can expect that as computing power continues to increase and cloud systems become more accessible, the accuracy of simulations will keep improving in both senses, physics and looks.

2. Build better ways to cross the reality gap. In his original work, Noise and the Reality Gap: The Use of Simulation in Evolutionary Robotics, Jakobi (who identified the reality-gap problem) indicated that one of the first solutions is to make the simulation results independent of the reality gap. His idea was to introduce noise into the variables that are not relevant to the task. The modern version of that noise injection is the concept of domain randomization, as described in the paper Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World.

Domain randomization basically consists of training the robot in a simulated environment whose task-irrelevant features are changed randomly: the colors of the elements, the lighting conditions, the relative positions of other elements, etc.

The goal is to make the trained algorithm unaffected by the elements of the scene that provide no information for the task at hand but may confuse it (because the algorithm doesn't know which parts are relevant to the task). I see domain randomization as a way to tell the algorithm where to focus its attention within the huge flow of data it receives.
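The mechanics of domain randomization are simple: before every training episode, resample the nuisance properties of the simulated scene. The parameter names and ranges below are invented purely for illustration (they are not from the OpenAI paper), and the training call is a hypothetical placeholder:

```python
import random

# Domain randomization sketch: before each episode, the task-irrelevant
# properties of the simulated scene are resampled, so the policy cannot
# come to rely on them.
def randomize_scene(rng):
    return {
        "table_color": [rng.random() for _ in range(3)],    # RGB in [0, 1]
        "light_intensity": rng.uniform(0.2, 2.0),
        "camera_offset_m": (rng.uniform(-0.05, 0.05),
                            rng.uniform(-0.05, 0.05)),
        "distractor_count": rng.randint(0, 5),
    }

rng = random.Random(42)
for episode in range(3):
    scene = randomize_scene(rng)
    # train_one_episode(policy, scene)  # <- hypothetical training call
    print(episode, scene["distractor_count"])
```

Because the colors, lights, and distractors are different every episode, the only stable signal left for the policy to latch onto is the task-relevant one.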

Domain randomization applied by OpenAI to train a Fetch robot in simulation (photo credit: OpenAI)

In more recent work, the OpenAI team released a very interesting paper that improves on domain randomization, introducing the concept of dynamics randomization. In this case, it is not the environment that changes in the simulation but the physical properties of the robot (like its mass, the distance between grippers, etc.). The paper is Sim-to-Real Transfer of Robotic Control with Dynamics Randomization. That is the approach OpenAI engineers took to successfully achieve the hand-manipulation result.
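Dynamics randomization has the same per-episode resampling shape, only applied to the robot model itself. Again, the parameters, nominal values, and ±20% range below are illustrative inventions, not the values used by OpenAI, and the simulation call is a hypothetical placeholder:

```python
import random

# Dynamics randomization sketch: each episode, the simulated robot's
# physical properties are perturbed, so the learned controller must work
# across a whole family of slightly different robots - including,
# hopefully, the real one.
def randomize_dynamics(rng, nominal):
    # Perturb each nominal value by up to +/-20%.
    return {name: value * rng.uniform(0.8, 1.2)
            for name, value in nominal.items()}

nominal = {"link_mass_kg": 0.35, "joint_friction": 0.04,
           "gripper_gap_m": 0.08, "motor_gain": 1.0}

rng = random.Random(7)
for episode in range(3):
    dynamics = randomize_dynamics(rng, nominal)
    # simulate_and_train(policy, dynamics)  # <- hypothetical call
    print(round(dynamics["link_mass_kg"], 3))
```

The intuition: if no single simulated robot is exactly right, the policy cannot overfit to the simulator's particular (and inevitably wrong) physics.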

Some software along these lines

What follows is a list of software that allows the training of robots in simulation. I'm not including plain robotics simulators (like Gazebo, Webots, and V-REP) because they are just that, robot simulators in the general sense. The software listed here goes one step beyond and provides a more complete solution for doing the training in simulation. I have also left out the system used by OpenAI (MuJoCo) because it requires you to build your own development environment.

Carla

Carla is an open source simulator for self-driving cars based on Unreal Engine. It has recently included a ROS bridge.

Carla simulator (photo credit: Carla)

Microsoft AirSim

Microsoft's AirSim simulator follows a similar approach to Carla's, but for drones. Recently, they updated the simulator to also include self-driving cars.

AirSim (photo credit: Microsoft)

Nvidia Isaac

Nvidia Isaac aims to be a complete solution for training robots on simulations and then transferring to real robots. There is still nothing available, but they are working on it.

Isaac (photo credit: Nvidia)

ROS Development Studio

The ROS Development Studio is the development environment our company created. It was conceived from the beginning to simulate and train any ROS-based robot, with nothing to install on your computer (it is cloud-based). Simulations of the robots are provided with all the ROS controllers up and running, as well as the machine-learning tools. It includes a system of Gym cloud computers for training robots in parallel on a virtually unlimited number of machines.

ROS Development Studio showing an industrial environment

Here is a video showing a simple example of training two cartpoles in parallel using the Gym computers inside the ROS Development Studio:
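The parallel-training idea itself can be sketched in plain Python: several independent workers each run their own simulated episodes and report their experience back. This is only an illustration of the concept, not the ROS Development Studio's actual mechanism; threads stand in for cloud machines, and a random number stands in for a cartpole episode's return:

```python
import random
from concurrent.futures import ThreadPoolExecutor

# Each worker runs its own batch of simulated episodes independently.
# In a real cloud setup these would be separate machines or processes.
def training_worker(worker_id, episodes):
    rng = random.Random(worker_id)
    total_reward = 0.0
    for _ in range(episodes):
        # Stand-in for the return of one simulated cartpole episode.
        total_reward += rng.uniform(0.0, 200.0)
    return worker_id, total_reward

# Two workers training "in parallel", like the two cartpoles in the video.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(training_worker, [0, 1], [100, 100]))

for worker_id, reward in results:
    print(worker_id, round(reward, 1))
```

Because the workers never wait on each other, adding more of them multiplies the amount of experience gathered per hour, which is the whole point of training in the cloud.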


(Readers, if you know other software like this that I can add, let me know.)

Conclusion

Making all those deep neural networks learn in training simulations is the way to go, and this is just the tip of the iceberg. My personal opinion is that intelligence is more embodied than current AI approaches admit: you cannot have intelligence without a body. Hence, I believe the use of simulated embodiments will grow even further in the future. We'll see.

Memory-jogging robot to keep people sharp in ‘smart’ retirement homes

Sensors placed throughout a retirement home helped the ENRICHME robot to keep track of the movements and activities of residents taking part in the project’s trial. Image credit – ENRICHME

by Steve Gillman

Almost a fifth of the European population is over 65 years old, but while quality of life for this age bracket is better than ever before, many will at some point suffer from a decline in their mental abilities.

Without adequate care, cognitive abilities can decline more quickly, yet with the right support people can live longer, healthier and more independent lives. Researchers working on a project called ENRICHME have attempted to address this by developing a robotic assistant to help improve mental acuity.

‘One of the big problems of mild cognitive impairment is temporary memory loss,’ said Dr Nicola Bellotto from the School of Computer Science at the University of Lincoln in the UK and one of the principal investigators of ENRICHME.

‘The goal of the project was to assist and monitor people with cognitive impairments and offer basic interactions to help a person maintain their cognitive abilities for longer.’

The robot moves around a home providing reminders about medication as well as offering regular physical and mental exercises – it can even keep tabs on items that are easily misplaced.

A trial was conducted with the robot in three retirement homes in England, Greece and Poland. At each location the robot helped one or more residents, and was linked to sensors placed throughout the building to track the movements and activities of those taking part.

‘All this information was used by the robot,’ said Dr Bellotto. ‘If a person was in the bedroom or kitchen the robot could rely on the sensors to know where the person is.’

The robot was also kitted out with a thermal camera so it could measure the temperature of a person in real time, allowing it to estimate the levels of their respiration and heartbeat. This could reveal if someone was experiencing high levels of stress related to a particular activity and inform the robot to act accordingly.

This approach is based around a principle called ambient assisted living, which combines technology and human carers to improve support for older people. In ENRICHME’s case, Dr Bellotto said their robots could be operated by healthcare professionals to provide tailored care for elderly patients.

The users involved in the trials showed a high level of engagement with the robot they were living with – even naming it Alfie in one case – and also provided good feedback, said Dr Bellotto. But he added that the robot is still a few years away from being rolled out to the wider public.

‘Some of the challenges we had were how to approach the right person in these different environments because sometimes a person lives with multiple people, or because the rooms are small and cluttered and it is simply not possible for the robot to move safely from one point to another,’ he said.

Dr Bellotto and his team are now applying for funding for new projects to solve the remaining technical problems, which hopefully one day will help them take the robot one step closer to commercialisation.

This type of solution would help increase people’s physical and psychological wellbeing, which could help to reduce public spending on care for older people. In 2016 the total cost of ageing in the EU was 25% of GDP, and this figure is expected to rise in the coming decades.

Ageing populations

‘One of the big challenges we have in Europe is the high number of elderly people that the public health system has to deal with,’ said Professor María Inés Torres, a computer scientist from the University of the Basque Country in Spain.

Older people can fall into bad habits for a variety of reasons, such as being unable to go for walks and cook healthy meals because of physical limitations. The loss of a loved one can also lead to reduced socialising, exercising or eating well. These unhealthy habits are all exacerbated by depression caused by loneliness, and with 32% of people aged over 65 living alone in Europe, this is a significant challenge to overcome.

‘If you are able to decrease the correlation between depression and age, so keeping people engaged with life in general and social activities, these people aren’t going to visit the doctor as much,’ said Prof. Torres, who is also the coordinator of EMPATHIC, a project that has developed a virtual coach to help assist elderly people to live independently.

The coach will be available on smart devices such as phones or tablets and is focused on engaging with someone to help them keep up the healthier habits they may have had in the past.

‘The main goal is that the user reflects a little bit and then they can agree to try something,’ said Prof. Torres.

For instance, the coach may ask users if they would like to go to the local market to prepare their favourite dinner and then turn it into a social activity by inviting a friend to come along. This type of approach addresses the three key areas that cause older people's health to deteriorate, said Prof. Torres: poor nutrition, lack of physical activity and loneliness.

For the coach to be effective the researchers have to build a personal profile for each user, as every person is different and requires specific suggestions relevant to them. They do this by building a database for each person over time and combining it with dialogue designed around the user’s culture.

The researchers are testing the virtual coach on smart devices with 250 older people in three areas – Spain, France and Norway – who are already providing feedback on what works and what doesn’t, which will increase the chances of the virtual coach actually being used.

By the end of the project in 2020, the researchers hope to have a prototype ready for the market, but Prof. Torres insisted that it will not replace healthcare professionals. Instead she sees the smart coach as another tool to help older people live a more independent life – and in doing so reduce the pressure on public healthcare systems.

The research in this article was funded by the EU. If you liked this article, please consider sharing it on social media.

Jillian Ogle is the first ‘Roboticist in Residence’ at Co-Lab

Currently also featured on the cover of MAKE magazine, Jillian Ogle is a robot builder, game designer and the founder of Let's Robot, a live-streaming interactive robotics community where users can control real robots via chatroom commands or put their own robots online. Some users can even make money with their robots on the Let's Robot platform, which allows viewers to make micropayments to access some robots. All you need is a robot doing something that's interesting to someone else, whether it's visiting new locations or letting the internet shoot ping pong balls at you while you work!

As the first ‘Roboticist in Residence’ at Co-Lab in Oakland, Ogle has access to all the equipment and 32,000 sq ft of space, providing her robotics community with a super-large robot playground for her live broadcasts. And the company of fellow robot geeks. Co-Lab is the new coworking space at Circuit Launch supported by Silicon Valley Robotics, and provides mentors, advisors and community events, as well as electronics and robotics prototyping equipment.

You can meet Ogle at the next Silicon Valley Robotics speaker salon, “The Future of Robotics is FUN”, on Sept 4 2018. She'll be joined by Circuit Launch COO Dan O'Mara and Mike Winter, Battlebot World Champion and founder of the new competition ‘AI or Die’. Small, cheap, phone-powered robots are becoming incredibly intelligent, and Ogle and Winter are at the forefront of pushing the envelope.

Ogle sees Let’s Robot as the start of a new type of entertainment, where the relationship between viewers and content is two-way and interactive – particularly because robots can go places that some of us can't, like the Oscars. Ogle has ironed out many of the problems with telepresence robotics, including faster response times for two-way commands. Plus, it's more entertaining than old-school telepresence, with robots able to take a range of actions in the real world.

While the majority of the robots are still small and toylike, Ogle believes this is just the beginning of the way we'll interact with robots in the future. Interaction is Ogle's strength: she started her career as an interactive and game designer, previously working at companies like Disney, and was also a participant in Intel's Software Innovators program.

“I started all this by building dungeons out of cardboard and foam in my living room. My background was in game design, so I’m like, ‘Let’s make it a game.’ There’s definitely a narrative angle you could push; there’s also the real-world exploration angle. But I started to realize it’s a little bigger than that, right? With this project, you can give people access to things they couldn’t access by themselves.” said Jillian talking to Motherboard.

Here are the instructions from Makezine for connecting your own robot to Let’s Robot. The robot-side software is open source and runs on most Linux-based computers. There is even an API that allows you to fully customize the experience. If you’re building your own, start here.

Most of the homebrew robots on Let’s Robot use the following components:

  • Raspberry Pi or other single-board computer. The newest Raspberry Pi has onboard Wi-Fi; you just need to point it at your access point.
  • SD card with Raspbian or NOOBS installed. You can follow our guide to get our software running on your robot and pair it with the site: letsrobot.tv/setup.
  • Microcontroller, such as Arduino. The Adafruit motor hat is also popular.
  • Camera to see
  • Microphone to hear
  • Speaker to let the robot talk
  • Body to hold all the parts
  • Motors and servos to move and drive around
  • LEDs and sensors to make things interesting
  • And a battery to power it all
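The heart of such a build is tiny: map incoming chat commands to motor speeds. The sketch below is a toy illustration (not the actual Let's Robot software); the command names and speed values are invented, and on a real robot the returned pair would be fed to a motor driver (e.g. over the Pi's GPIO):

```python
# Toy chat-to-robot sketch: chatroom commands become left/right wheel
# speeds in [-1, 1], which a real build would send to a motor driver.
COMMANDS = {
    "forward": (1.0, 1.0),
    "back":    (-1.0, -1.0),
    "left":    (-0.5, 0.5),   # spin in place, counter-clockwise
    "right":   (0.5, -0.5),   # spin in place, clockwise
    "stop":    (0.0, 0.0),
}

def handle_chat(message):
    """Return (left_speed, right_speed) for a chat command; stop if unknown."""
    return COMMANDS.get(message.strip().lower(), (0.0, 0.0))

print(handle_chat("Forward"))   # (1.0, 1.0)
print(handle_chat("dance!"))    # unknown command -> safe stop (0.0, 0.0)
```

Defaulting unknown input to a stop is a sensible safety choice when strangers on the internet are driving your hardware.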

A lot of devices and robots are already supported by Let’s Robot software, including the GoPiGo Robot, and Anki Cozmo. If you have an awesome robot just sitting on the shelf collecting some dust, this could be a great way to share it with everyone! There’s also a development kit called “Telly Bot” which works out of the box with the letsrobot.tv site. See you online!


DelFly Nimble mimics the high-speed escape of fruit flies

DelFly Nimble in forward flight. Credits: TU Delft

Bio-inspired flapping-wing robots hold great potential. The promise is that they can fly very efficiently even at small scales, while being able to fly fast, hover, and make quick maneuvers. We now present a new flapping-wing robot, the DelFly Nimble, that is so agile it can accurately mimic the high-speed escape maneuvers of fruit flies. In the scientific article, published in Science, we show that the robot's motion resembles that of the fruit fly so closely that it allowed us to better understand the dynamics of fruit flies during escape maneuvers. Here at Robohub, we wish to give a bit more background about the motivation for and process behind the final design of this robot, and what we think the future may bring.

At TU Delft, we have been working since 2005 on flapping wing robots. The first one, the DelFly I, was made by a group of students, guided by the lab staff. Already this first model, which had a wingspan of 50 cm and a weight of 21 grams, was designed to carry a camera for remote inspection. In fact, one of the defining aspects of the DelFly, the X-wing configuration, was chosen so that a camera attached to the body would not vibrate too much during flight. This configuration also really worked out well for the efficiency of flight, as the touching and parting of the wings (termed “clap-and-fling”) significantly increased lift production. DelFly I could be tele-piloted with First Person View to fly into a room, look around, and fly out again.
Since the first DelFly design, we have been scaling the DelFly down (even to the camera-equipped, 10 cm-wingspan, 3-gram DelFly Micro), but we have always kept the focus on a fully operational platform with a camera (see video below).

Moreover, we have made the DelFly smarter (e.g., the fully autonomous 20-gram DelFly Explorer in the video below), so that it can fly around in buildings if there is no video link to a human pilot.

However, when flying all these DelFly models in realistic real-world environments, we were often faced with the challenge of wind and wind gusts. This is definitely a challenge when flying outdoors, or from outdoors to indoors, but even a strong air conditioner could push the DelFly off its course, enough to cause trouble. The problem is that all these DelFly models steered by means of an airplane-like tail, with a rudder and elevator. When flying fast, the air flows quickly over these surfaces, providing quite some control authority; but when flying slower, most of the airflow comes from the flapping of the DelFly's own wings, leaving much less control authority.

The solution to this problem is to steer more like insects do: by changing the wing motion to induce body rotations. This allows for much higher torques and hence much more agile behavior. Removing the stabilizing tail does introduce the need for active attitude control. Steering with the wings was first used on a flapping-wing robot by AeroVironment's Nano Hummingbird and – with a very different mechanical design – is also used on the tiny RoboBee. For the DelFly design, we have made some different choices, as we will explain below.

Looking at flying animals, there are many ways in which the rotations around their three orthogonal body axes can be controlled. The wing-motion patterns of flying insects differ widely among species, and the changes in these patterns that induce the body rotations often involve many parameters. With robots, however, we are limited by the currently available actuators, and by the current level of technology in general. We were thus searching for a solution that would: 1) require a minimal number of actuators to generate the three torques inducing the respective body rotations, plus the lift force keeping the robot airborne; 2) be able to generate these torques independently, ideally by mechanically decoupled mechanisms; and 3) be as simple as possible, to keep the system lightweight. Moreover, we wanted to reuse as much as possible of our previous, reliable and efficient DelFly designs.

DelFly Explorer with tail and ailerons. Credits: TU Delft

The new, tailless DelFly Nimble. Credits: TU Delft

We landed on a concept that reuses two of the flapping-wing mechanisms, each with two 14-cm wings installed on one side only. The two mechanisms, one on each side of the robot, provide the thrust force but also roll moments if one of the two wing pairs is driven at a higher frequency, and thus produces a higher thrust force, than the other. These two mechanisms are not rigidly fixed to the body; their relative orientation can be controlled by a simple, 3D-printed, servo-actuator-driven gear mechanism. This allows the flapping wings (and their thrust forces) to be shifted in front of or behind the body, which results in a pitch moment, making the body tilt forward or backward. Finally, we added a third servo actuator that deflects the bottoms of the wing roots asymmetrically; this tilts the thrust vectors clockwise or counterclockwise (when viewed from the top) and results in a yaw torque, making the body turn around its axis.
Compared to previous tailless designs, this solution allows production with available, established manufacturing technologies and relatively easy assembly. Moreover, the effectiveness of the control mechanisms is very high, allowing highly agile maneuvers. Importantly, this effectiveness is rather insensitive to the position of the center of gravity, which can shift when a battery is changed or a camera is added. Thanks to this insensitivity, we can even play with the position of the center of gravity, making the DelFly either more agile or more stable in flight, without losing control authority. Finally, for high-speed forward flight, the DelFly has to move its wings backwards to pitch forward; this also introduces a positive dihedral effect that helps stabilize this fast flight mode.
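The actuation scheme described above maps naturally onto a small control mixer: roll from a flapping-frequency difference between the two wing pairs, pitch from the servo that shifts the wing pairs fore/aft, and yaw from the servo that deflects the wing roots asymmetrically. The sketch below is hypothetical; the gains, limits, and frequencies are invented for illustration and are not the DelFly Nimble's actual values:

```python
# Hypothetical control mixer for the three-actuator scheme described in
# the text. All gains and limits are invented for illustration only.
def mix(thrust_hz, roll_cmd, pitch_cmd, yaw_cmd):
    clamp = lambda v, lo, hi: max(lo, min(hi, v))
    # Roll: drive one wing pair faster than the other (asymmetric thrust).
    left_hz  = clamp(thrust_hz - 2.0 * roll_cmd, 0.0, 20.0)
    right_hz = clamp(thrust_hz + 2.0 * roll_cmd, 0.0, 20.0)
    # Pitch: servo shifting both wing pairs fore/aft of the body.
    pitch_servo_deg = clamp(15.0 * pitch_cmd, -15.0, 15.0)
    # Yaw: servo deflecting the wing roots asymmetrically.
    yaw_servo_deg = clamp(10.0 * yaw_cmd, -10.0, 10.0)
    return left_hz, right_hz, pitch_servo_deg, yaw_servo_deg

# Hover with a small roll command: the right wing pair flaps faster.
print(mix(14.0, roll_cmd=0.5, pitch_cmd=0.0, yaw_cmd=0.0))
```

Note how the three torque channels stay mechanically decoupled in the mixer, matching design goal 2) above: each command drives its own actuator without touching the others.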

We think that the DelFly Nimble is a great starting point to start exploring real-world applications, such as flying in greenhouses to monitor the crop or flying in warehouses to scan the stock. It can hover for over 5 minutes and when flying fast forward it has a range of a kilometer. With its onboard computer – an open source 2.8 gram Lisa S autopilot that is used for actively stabilizing the robot – we can make it execute pre-programmed behaviors such as flips or rapid turns. Towards the future, we want to investigate ways to make it smaller and smarter, but we promise not to teach it how to lay eggs on the fruit in your kitchen.

Research team
Matěj Karásek (1)
Florian Muijres (2)
Christophe De Wagter (1)
Bart Remes (1)
Guido de Croon (1)
(1) Micro Air Vehicle Laboratory, Control and Simulation, Delft University of Technology, Delft, Netherlands.
(2) Experimental Zoology Group, Wageningen University and Research, Wageningen, Netherlands.

Culturally competent robots – the future in elderly care

Future robots will assist the elderly while adapting to the culture of the individual they are caring for. The first robots of this type are now being tested in retirement homes within the scope of "Caresses," an interdisciplinary project in which AI researchers from Örebro University are participating.