Flowstate: Intrinsic’s app to simplify the creation of robotics applications

Copyright by Intrinsic.

Finally, Intrinsic (a spin-off of Google X) has revealed the product they have been working on with the help of the Open Source Robotics Corporation team (among others): Flowstate!

What is Flowstate?

Introducing Intrinsic Flowstate | Intrinsic (image copyright by Intrinsic)

Flowstate is web-based software designed to simplify the creation of software applications for industrial robots. The application provides a user-friendly, desktop-like environment where blocks can be combined to define the desired behavior of an industrial robot for specific tasks.

Good points

  • Flowstate offers a range of features, including simulation testing, debugging tools, and seamless deployment to real robots.
  • It is based on ROS, so we should be able to use our favorite framework, and all the existing ROS software, to program with it, including Gazebo simulations.
  • It has a behavior-tree-based system to graphically control the flow of the program, which simplifies program creation to moving blocks around. It is also possible to switch to an expert mode to edit the code manually.
  • It has a library of existing robot models and hardware ready to be added, but you can also add your own.
  • Additionally, the application provides pre-built AI skills that can be used as modules to achieve complex AI results without the need for manual coding.
  • One limitation (which I actually consider a good point) is that the tool is intended for industrial robots, not for service robots in general. This is good because it provides a focus for the product, especially for this initial release.

Flowstate | Intrinsic (image copyright by Intrinsic)

Based on the official post and the keynote released on Monday, May 15, 2023 (available here), this is the information we have gathered so far. However, we currently lack a comprehensive understanding of how the software works, its complete feature set, and its potential limitations. To gain more insights, we must wait until July of this year; I hope to be among the lucky participants selected for the private beta (the open call for the beta is still available here).

Unclear points

Even though I find Intrinsic's proposal interesting, I have identified three potential concerns:

  1. Interoperability across different hardware and software platforms poses a challenge. The recruitment of the full OSRC team by Intrinsic appears to address this issue, given that ROS is currently the closest system in the market to achieve such interoperability. However, widespread adoption of ROS by industrial robot manufacturers is still limited, with only a few companies embracing it.

    Ensuring hardware interoperability necessitates the adoption of a common framework by robot manufacturers, which is currently a distant reality. What we ROS developers want right now is for somebody to build the ROS drivers for the robotic arm we want to use (for example, the manufacturer of the robot, or the ROS-Industrial team). However, manufacturers generally hesitate to develop ROS drivers due to potential business limitations and their aim of customer lock-in. Unless a platform dedicates substantial resources to developing and maintaining drivers for supported robots, the challenge of hardware interoperability cannot be solved by a platform alone (in fact, that is one of the goals that ROS-Industrial is trying to achieve).

    Google possesses the potential to unite hardware companies towards this goal. As Wendy Tan White, the CEO of Intrinsic, mentioned, “This is an ecosystem effort.” However, it is crucial for the industrial community to perceive tangible benefits and value in supporting this initiative beyond merely helping others build their businesses. The specific benefits the ecosystem stands to gain by supporting this initiative remain unclear.

Flowstate | Intrinsic (image copyright by Intrinsic)

  2. Providing pre-made AI skills for robots is a complex task. Consider the widely used skills in ROS, such as navigation or arm path planning, exemplified by Nav2 and MoveIt, which offer excellent functionality. However, integrating these skills into new robots is not as simple as plug-and-play. In fact, dedicated courses exist to teach users how to effectively utilize the different components of navigation within a robot. This highlights the challenges associated with implementing such skills for robots in general. Thus, it is reasonable to anticipate similar difficulties in developing pre-made skills within Flowstate.
  3. A final point that is not clear to me (because it was not addressed in the presentation) is how the company is going to do business with Flowstate. This is a very important point for every robotics developer, because we don't want to be locked into proprietary systems. We understand that companies must have a business, but we want to understand clearly what the business is, so we can decide whether it is convenient for us, both in the short and the long run. For instance, RoboMaker from Amazon did not gain much traction because it forced developers to pay for the cloud while running RoboMaker, when they could do the same thing (with less fancy tooling) on their own local computers for free.

Conclusion

Overall, while Flowstate shows promise, further information and hands-on experience are required to assess its effectiveness and address potential challenges.

I have applied to the restricted beta. I hope to be selected so I can have first-hand experience and report on it.

Please make sure to read the original post by Wendy Tan White and the keynote presentation; both can be found on Intrinsic's website.

Flowstate | Intrinsic (image copyright by Intrinsic)

ROS Awards 2022 results

The ROS Awards are the Oscars of the ROS world! The intention of these awards is to express recognition for contributions to the ROS community and the development of the ROS-based robot industry, and to help those contributions gain awareness.

Conditions

  • Selection of the winners is made by anonymous online voting over a period of 2 weeks
  • Anybody in the ROS community can vote through the voting-enabled website
  • Organizers of the awards provide an initial list of 10 possible projects for each category, but the list can be extended at any time by anybody during the voting period
  • Since the Awards are organized by The Construct, none of its products or developers can be voted for
  • Winners are announced at the ROS Developers Day yearly conference
  • New in the 2022 edition: winners of previous editions cannot win again, so as not to concentrate the focus on the same projects all the time. Remember, with these awards we want to help spread all ROS projects!

Voting

  1. Every person can only vote once in each category
  2. You cannot change your answers once you have submitted your vote
  3. Voting closes 3 days before the conference, and the list of finalists for each category is announced in the same week
  4. Voters cannot use flaws in the system to influence voting; any detected attempt to trick the system will disqualify the votes. You can, though, promote your favorite among your networks so that others vote for it.

Measures have been taken to prevent as much as possible batch voting from a single person.

Categories

Best ROS Software

The Best ROS Software category comprises any software that runs with ROS. It can be a package published in the ROS.org repositories, or simply software that uses ROS libraries. Both open source and closed source are valid.

Finalists

  1. Ignition Gazebo, by Open Robotics
  2. Groot Behavior Tree, by Davide Faconti
  3. Webots, by Cyberbotics
  4. SMACC2, by Brett Aldrich
  5. ros2_control, by several ROS developers
  6. PlotJuggler, by Davide Faconti

Winner: Webots, by Cyberbotics

Learn more about the winner in this video:

Best ROS-Based Robot

The Best ROS-Based Robot category includes any robot that runs ROS inside it. They can be robotics products, robots for research, or robots for education. In all cases, they must be running ROS inside.

Finalists

  1. Panda robot arm, by Franka Emika
  2. TIAGo, by Pal Robotics
  3. UR robot arm, by Universal Robots
  4. Turtlebot 4, by Clearpath
  5. Nanosaur, by Raffaello Bonghi
  6. Leo Rover, by Leo Rover

Winner: Nanosaur, by Raffaello Bonghi

Learn more about the winner in this video:

Best ROS Developer

Developers are the ones that create all the ROS software that we love. The Best ROS Developer category allows you to vote for any developer who has contributed to ROS development in one sense or another.

Finalists

  1. Francisco Martín
  2. Davide Faconti
  3. Raffaello Bonghi
  4. Brett Aldrich
  5. Victor Mayoral Vilches
  6. Pradheep Krishna

Winner: Francisco Martín

Learn more about the winner in this video:

Insights from the 2022 Edition

  1. This year, the third time we have organised the awards, the total number of votes increased by 500%, so we can say that the winners are a good representation of the feelings of the community.
  2. Even so, this year the winners of previous editions received many votes. Fortunately, we applied the new rule that previous winners cannot win again, to give other ROS projects the community's focus and hence help create a rich ROS ecosystem.

Conclusions

The ROS Awards started in 2020 with a first edition where the winners were some of the best and well-known projects in the ROS world. In this third edition, we have massively increased the number of votes from the previous edition. We expect this award will continue to contribute to the spreading of good ROS projects.

See you again at ROS Awards 2023!

Exploring ROS2 with wheeled robot – #1 – Launch ROS2 Simulation

By Marco Arruda

This is the 1st chapter of the series “Exploring ROS2 with a wheeled robot”. In this episode, we set up our first ROS2 simulation using Gazebo 11: from cloning and compiling to creating a package + launch file to start the simulation!

You’ll learn:

  • How to Launch a simulation using ROS2
  • How to Compile ROS2 packages
  • How to Create launch files with ROS2

1 – Start the environment

In this series we are using ROS2 Foxy. Go to this page, create a new rosject selecting the ROS2 Foxy distro, and run it.

2 – Clone and compile the simulation

The first step is to clone the dolly robot package. Open a web shell and execute the following:

cd ~/ros2_ws/src/
git clone https://github.com/chapulina/dolly.git

Source the ROS 2 installation folder and compile the workspace:

source /opt/ros/foxy/setup.bash
cd ~/ros2_ws
colcon build --symlink-install --packages-ignore dolly_ignition

Notice we are ignoring the Ignition-related package; that's because we will work only with the Gazebo simulator.

3 – Create a new package and launch file

In order to launch the simulation, we will create the launch file from scratch. It goes like this:

cd ~/ros2_ws/src
ros2 pkg create my_package --build-type ament_cmake --dependencies rclcpp

After that, you should have the new folder my_package in your workspace. Create a new folder to contain the launch files, and the new launch file itself:

mkdir -p ~/ros2_ws/src/my_package/launch
touch ~/ros2_ws/src/my_package/launch/dolly.launch.py

Copy and paste the following to the new launch file:

import os

from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource


def generate_launch_description():

    pkg_gazebo_ros = get_package_share_directory('gazebo_ros')
    pkg_dolly_gazebo = get_package_share_directory('dolly_gazebo')

    gazebo = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(
            os.path.join(pkg_gazebo_ros, 'launch', 'gazebo.launch.py')
        )
    )

    return LaunchDescription([
        DeclareLaunchArgument(
            'world',
            default_value=[os.path.join(pkg_dolly_gazebo, 'worlds', 'dolly_empty.world'), ''],
            description='SDF world file',
        ),
        gazebo
    ])

Notice that a launch file returns a LaunchDescription that contains nodes or other launch files.

In this case, we have just included another launch file, gazebo.launch.py, and changed one of its arguments, the one that stands for the world name: world.

The robot, in that case, is included in the world file, so there is no need to have an extra spawn node, for example.

Then add the following to the file ~/ros2_ws/src/my_package/CMakeLists.txt so the new launch file is installed into the ROS 2 environment. Note that ament_package() must remain the last call in the file, so place the install() rule just before it:

install(DIRECTORY
  launch
  DESTINATION share/${PROJECT_NAME}/
)

ament_package()

4 – Compile and launch the simulation

Use the command below to compile only the created package:

cd ~/ros2_ws/
colcon build --symlink-install --packages-select my_package
source ~/ros2_ws/install/setup.bash
ros2 launch my_package dolly.launch.py

5 – Conclusion

This is how you can launch a simulation in ROS2. It is important to notice that:

  • We are using a pre-made simulation: world + robot
  • This is how a launch file is created: a Python script
  • In ROS2, you still have the same freedom to include other launch files or run executables inside a custom launch file (a minimal sketch follows below)
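
As an illustration of that last point, here is a minimal sketch of a launch file that starts a single node with launch_ros. The package and executable names (demo_nodes_cpp / talker, from the standard ROS 2 demos) are placeholders; swap in your own:

from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    # start one executable; package/executable are placeholder names
    talker = Node(
        package='demo_nodes_cpp',
        executable='talker',
        output='screen',
    )
    return LaunchDescription([talker])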


How to build a robotics startup: getting some money to start

This episode is about learning the options you have to get some money to start your startup, and what you are expected to achieve with that money.

In this podcast series of episodes we are going to explain how to create a robotics startup step by step.

We are going to learn how to select your co-founders, your team, how to look for investors, how to test your ideas, how to get customers, how to reach your market, how to build your product… Starting from zero, how to build a successful robotics startup.

I'm Ricardo Tellez, CEO and co-founder of The Construct, a robotics startup at which we deliver the best learning experience to become a ROS developer, that is, to learn how to program robots with ROS.

Our company is already 5 years old; we are a team of 10 people working around the world. We have more than 100,000 students, and dozens of universities around the world use our online academy to provide the teaching environment for their students.

We have bootstrapped our startup, but we also (unsuccessfully) tried getting investors. We have done a few pivots and finally ended up where we are right now.

With all this experience, I'm going to teach you how to build your own startup. And we are going to go through the process by creating another startup ourselves, so you can see along the way how to create your own. You are going to witness the creation of such a robotics startup.

Subscribe to the podcast using any of the following methods

Or watch the video


How to build a robotics startup: getting the team right

This episode is about understanding why you can't build your startup alone, and some criteria for properly selecting your co-founders.

In this podcast series of episodes we are going to explain how to create a robotics startup step by step.

We are going to learn how to select your co-founders, your team, how to look for investors, how to test your ideas, how to get customers, how to reach your market, how to build your product… Starting from zero, how to build a successful robotics startup.

I'm Ricardo Tellez, CEO and co-founder of The Construct, a robotics startup at which we deliver the best learning experience to become a ROS developer, that is, to learn how to program robots with ROS.

Our company is already 5 years old; we are a team of 10 people working around the world. We have more than 100,000 students, and dozens of universities around the world use our online academy to provide the teaching environment for their students.

We have bootstrapped our startup, but we also (unsuccessfully) tried getting investors. We have done a few pivots and finally ended up where we are right now.

With all this experience, I'm going to teach you how to build your own startup. And we are going to go through the process by creating another startup ourselves, so you can see along the way how to create your own. You are going to witness the creation of such a robotics startup.

Subscribe to the podcast using any of the following methods

Or watch the video


How to build a robotics startup: the product idea

In this podcast series of episodes we are going to explain how to create a robotics startup step by step.

We are going to learn how to select your co-founders, your team, how to look for investors, how to test your ideas, how to get customers, how to reach your market, how to build your product… Starting from zero, how to build a successful robotics startup.

I'm Ricardo Tellez, CEO and co-founder of The Construct, a robotics startup at which we deliver the best learning experience to become a ROS developer, that is, to learn how to program robots with ROS.

Our company is already 5 years old; we are a team of 10 people working around the world. We have more than 100,000 students, and dozens of universities around the world use our online academy to provide the teaching environment for their students.

We have bootstrapped our startup, but we also (unsuccessfully) tried getting investors. We have done a few pivots and finally ended up where we are right now.

With all this experience, I'm going to teach you how to build your own startup. And we are going to go through the process by creating another startup ourselves, so you can see along the way how to create your own. You are going to witness the creation of such a robotics startup.

This episode is about deciding the product your startup will produce.

Related links

Subscribe to the podcast using any of the following methods

Or watch the video


Teaching robotics with ROS

A couple of months ago I interviewed Joel Esposito about the state of robotics education for the ROS Developers Podcast #21. On that podcast, Joel talks about his research on how robotics is taught around the world. He identifies a set of common robotics subjects that need to be explained in order for students to learn robotics, and a list of resources people are using to teach them. But most importantly, he points out the importance of having students practice with robots what they learn.

From my point of view, robotics is about doing, not about reading. A robotics class cannot be about learning how to compute the Jacobian to find the inverse kinematics. Computing the Jacobian has nothing to do with robotics; it is just a mathematical tool we use to solve a robotics problem. That is why a robotics class should not focus on how to compute the Jacobian but on how to move the end effector to a given location.

  • I want to move this robotic arm’s end effector to that point, how do I do it?
  • Well, you need to represent the current configuration of the arm in a matrix!
  • Ah, ok, how do I do that?
  • You need to use a matrix to encode its current state. Let's do it. Take that simulated robot arm and create a ROS node that subscribes to /joint_states and captures the joint values. From those values, build the matrix. The node must update the matrix at every time step, so if I move the arm, the matrix changes (see the sketch after this dialogue).
  • […]
  • Done! So what do I have to do now if I want to move the robot gripper close to this bottle?
  • You need to compute the Jacobian. Let’s do it! Create a program that manually introduce a desired position for the end effector. Then based on that data, compute the Jacobian.
  • How do I compute the Jacobian?
  • I love that you ask me that!
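
As a reference for that exercise, here is a minimal sketch of such a node. It is written with ROS 2's rclpy (the ROS 1 rospy version is analogous); the node name is an assumption, and the "matrix" is kept as a plain vector of joint values from which the kinematic matrices would be rebuilt:

import numpy as np
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import JointState

class ArmStateTracker(Node):
    """Keep an up-to-date vector of joint values from /joint_states."""

    def __init__(self):
        super().__init__('arm_state_tracker')
        self.joint_values = None
        self.create_subscription(JointState, '/joint_states', self.on_joints, 10)

    def on_joints(self, msg):
        # updated on every message, so moving the arm changes the stored state
        self.joint_values = np.array(msg.position)

def main():
    rclpy.init()
    rclpy.spin(ArmStateTracker())

if __name__ == '__main__':
    main()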

I understand that learning and studying theory is a big part of robotics, but not the biggest. The biggest part should be practicing and doing with robots. For that, I propose to use ROS as the base system, the enabler, that allows practical practice (does that exist?!?!) while learning robotics.

The proposal

Teaching subjects as complex as inverse kinematics or robot navigation should not be (just) a theoretical matter. Practice should be embedded into the teaching of those complex robotics subjects.

I propose to teach ROS at the same time that we teach robotics, and to use it throughout the whole robotics semester as the tool with which students build and implement the robotics subjects we are teaching. The idea is that we use ROS to allow the students to actually practice what they are learning. For instance, if we are talking about the different algorithms for obstacle avoidance, we also provide a simulated robot and make the student create a ROS program that actually implements the algorithm for that robot (a minimal example follows below). By following this approach, the student's learning is not only theoretical but also includes practicing what is being taught.
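
As a toy example of what such an exercise could look like, here is a sketch of a naive obstacle-avoidance node in ROS 2's rclpy: it reads the laser scan, and either drives forward or rotates in place when something is close ahead. The topic names are the usual conventions and the thresholds are arbitrary; this is a starting point for students, not a complete algorithm:

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

class NaiveAvoider(Node):
    """Drive forward; rotate when an obstacle is closer than 0.5 m ahead."""

    def __init__(self):
        super().__init__('naive_avoider')
        self.cmd_pub = self.create_publisher(Twist, '/cmd_vel', 10)
        self.create_subscription(LaserScan, '/scan', self.on_scan, 10)

    def on_scan(self, scan):
        n = len(scan.ranges)
        front = min(scan.ranges[n // 3: 2 * n // 3])  # central third of the scan
        cmd = Twist()
        if front < 0.5:
            cmd.angular.z = 0.5   # obstacle ahead: turn in place
        else:
            cmd.linear.x = 0.2    # clear: move forward
        self.cmd_pub.publish(cmd)

def main():
    rclpy.init()
    rclpy.spin(NaiveAvoider())

if __name__ == '__main__':
    main()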

The advantage of using ROS in this procedure is that ROS already provides a lot of material that can serve as a working infrastructure, so you, as a teacher, can concentrate on teaching the student how to create the small part required for the specific subject you are teaching.

We have so many tools at present that were not available just 5 years ago. Let's make use of them to increase the quality and quantity of student learning! Instead of using PowerPoint slides, let's use interactive Python notebooks. Instead of using a single robot for the whole class, let's provide a robot simulation to each student. Instead of showing an equation on the screen, let's provide students its implementation in a ROS node for them to modify.

Students learn robotics with ROS

Not a new method

What I preach is what I practice. This is the method I am using in my Robot Navigation class at the University of LaSalle in Barcelona. In this class, I teach the basics of how to make a robot autonomously move from one point in space to another while avoiding obstacles. You know: SLAM, particle filters, Dynamic Window Approach, and the like. As part of the class, students must learn ROS and use it to implement some of the theoretical concepts I explain to them, for example, how to compute odometry based on the values provided by the wheel encoders (a minimal sketch follows below).
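
To give a flavor of that exercise, here is a minimal sketch of differential-drive odometry computed from encoder tick increments. The robot constants are made-up placeholders that students replace with their platform's values:

import math

TICKS_PER_REV = 4096    # encoder resolution (placeholder)
WHEEL_RADIUS = 0.035    # wheel radius in meters (placeholder)
WHEEL_BASE = 0.23       # distance between wheels in meters (placeholder)

x, y, theta = 0.0, 0.0, 0.0  # robot pose estimate

def update_odometry(d_ticks_left, d_ticks_right):
    """Integrate one pair of encoder increments into the (x, y, theta) pose."""
    global x, y, theta
    # tick increments -> distance traveled by each wheel
    d_left = 2 * math.pi * WHEEL_RADIUS * d_ticks_left / TICKS_PER_REV
    d_right = 2 * math.pi * WHEEL_RADIUS * d_ticks_right / TICKS_PER_REV
    # distance traveled by the robot center and change of heading
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / WHEEL_BASE
    # dead-reckoning integration of the pose
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta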

However, I'm not the only one using this method to teach robotics. For example, professor Ross Knepper from Cornell University explained to me in Using ROS to teach the foundations of robotics for the ROS Developers Podcast #23 how he teaches ROS and robotics in parallel. He even goes further, forbidding students to use much existing ROS software, like the ROS navigation stack or MoveIt! His point is very good: he wants the students to actually learn how to do it, not just how to use something that somebody else has done (like the navigation stack). But he uses the ROS infrastructure anyway.

How to teach robotics with ROS

The method would involve the following steps:

  1. You start with a ROS class that explains the most basic topics of ROS, just what is required to start working
  2. Then you start explaining the robotics subject of your choice, making the students implement what you are teaching in a ROS program, using robots. I advise using simulated robots.
  3. Whenever a new ROS concept is required to continue the implementation of some algorithm, dedicate a class to that concept.
  4. Continue with the next robotics subject
  5. Add a real-robot project where students must apply everything they have learned in a single project with a real ROS-based robot
  6. Add an exam

1. Explain the basic ROS concepts

For this step, you should describe the basic subjects of ROS that would allow a student to create ROS programs. Those subjects include the following:

  • ROS packages
  • Launching ROS nodes
  • Basic ROS line commands
  • ROS topic subscribers
  • ROS topic publishers
  • Rviz for visualization and debugging

It is not necessary to dedicate too much time to those subjects, just the time necessary for the students to get to know ROS. Deeper knowledge and understanding of ROS will be gained by practicing throughout the different classes of the semester. While they are trying to create the software that implements the theoretical concepts, they will have to practice those ROS concepts, ingraining them deeper into their brains.

Very important: explain those concepts using simulated robots, and make the students apply the concepts to the simulated robot. For example, make the students read from a topic to get the distance to the closest obstacle. Or make them write to a topic to make the robot move around. Do not just explain what a topic is and provide code on a slide. Make them actually connect to a producer of data, that is, the simulated robot.

I would also recommend that at this step you skip teaching how to create custom messages, or what services and action clients are. That is too much for such an introductory class, and it is very unlikely you are going to need it in your next robotics class.

2. Explain and implement robotics subject

You can now proceed to teach your robotics subject. I recommend dividing the class time in two:

  1. The first part explains the theory of the robotics subject
  2. The second part actually implements that theory: create a program that actually does what you explained.

It may happen that, in order to implement the theoretical part, the student needs a lot of pre-made code that supports the specific point you are teaching. That means you have to prepare your class even harder and provide the student with all that support code. The good news is that, by using ROS, it is very likely you will find that code already written by someone. Hence, find or develop that code, and provide it as a ROS package to your students. More about where to find the code and how to provide it to your students below.

An interesting point to include here is what professor Knepper indicates: he creates some exercises with deliberate errors, so the students learn to recognize situations where the system is not performing correctly and, more importantly, build the skills necessary to figure out how to solve those errors. Take that point into consideration too.

Teaching Domain Randomization Reinforcement Learning with ROS robots

3. Add new ROS concept

At some point, you may need to create a custom ROS message for the robotics subject you are teaching. Or you may need a ROS service because you have to provide a service for face recognition. Whatever you need to explain about ROS, now is the best moment: use that necessity to explain the concept. What I mean is that explaining a concept like ROS action servers when the student has no idea what it could be used for is a bad idea. The optimal moment to explain a new ROS concept is when it is clear to everyone that the concept is needed.

I have found that explaining complex concepts like ROS services when the students don't need them makes it very difficult for them to understand, even if you provide good examples of usage. They cannot feel the pain of not having those concepts in their programs. Only when they are in a situation that actually requires the concept will they feel the pain, and the knowledge will be integrated into their heads.

4. Continue with the next robotics subject

Keep pushing this way. Move to the next subject. Provide a different robot simulation. Ask the students to implement the concept. And keep going until you finish the whole course.

5. Real robots project

Testing with simulators is good because it provides a real-life-like experience. However, the real test is when you use a real robot. Usually, universities do not have the budget for a robot for each student; that is why we have been promoting the use of simulations so much. However, some real-robot experience is necessary, and most universities can afford a few robots for the whole class. That is where the robotics project comes into action.

Define a robotics project that encapsulates the knowledge of the whole course. Make the students form groups that will test their results on the real robot. This approach has many advantages: students have to summarize all the lessons into a single application, they have to make it work on a real robot, and, additionally, they have to work in teams.

In order to provide this step to the students you will need to prepare the following:

  • A simulation of the environment where the real robot will operate. This allows the students to practice most of the code in the simulator, in a faster way. It also allows you to get some rest (otherwise the students may require your presence at the real-robot lab all the time 😉)
  • You cannot escape providing some hours per week for the students to practice with the real robot. So schedule that.

Finally, decide on a day when you will evaluate each team's project. That is going to be called demo day. On demo day, each group has to show how their code makes the robot perform as it is supposed to.

For example, at LaSalle we use two Turtlebots for 15 students. The students must form teams of two people to do the project. Then, on demo day, their robot has to be able to serve coffee in a real coffee shop in Barcelona (thank you, Costa Coffee, for your support!). All the students go to the coffee shop with their programs, and we bring the two robots. Then, while one team is demonstrating on one robot, the next one is preparing.

Costa Coffee Gazebo ROS simulation


In case you need ROS-certified robots, let me recommend the online shop of my friends: the ROS Components shop. That is where I buy robots or parts when I need them. No commission for me recommending it! I just think they are great.

6. Evaluation

One very important point in the whole process is how to evaluate the students. In my opinion, the evaluation must be continuous; otherwise the students do nothing until the day before the exam. Afterward, everyone complains.

At LaSalle, I give a one-hour exam every month, covering the topics I have taught during that month and the previous months too.

In our case, the exams are completely practical. The students have to make the robot perform something related to the subjects taught. For example, they have to make the robot create a map of the environment using an extended Kalman filter. I may provide the implementation of the extended Kalman filter for direct usage, but the students must know how to use it: how to capture the proper data from the robot, how to feed it to the filter, and how to use the output to build the map (a sketch of such a filter's skeleton follows below).
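
To give an idea of what "providing the implementation" means, here is a minimal sketch of the predict/update skeleton such a filter exposes to the students. Strictly speaking, this is the linear Kalman filter; the extended version obtains F and H as Jacobians of the nonlinear motion and measurement models:

import numpy as np

class KalmanFilterSkeleton:
    """Predict with a motion model; update with a measurement."""

    def __init__(self, x0, P0):
        self.x = x0  # state estimate
        self.P = P0  # state covariance

    def predict(self, F, Q):
        # x = F x ; P = F P F^T + Q
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q

    def update(self, z, H, R):
        # innovation, Kalman gain, corrected estimate
        y = z - H @ self.x
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P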

As an example, here you can find a ROSject containing the last exam I gave the students, about dead-reckoning navigation. The ROSject contains everything they needed for the exam, including instructions, scores, and robot simulations. That leads us to the next point.

How to provide the material to the students

If I have convinced you to use a practical method to teach robotics with ROS, at this point you must be concerned about two things:

  1. How am I going to create all that material?
  2. How can I provide my students with a running environment in which to execute that bunch of simulations and ROS code?

Very good concerns.

Preparing the material is a lot of work. And, more importantly, there is a lot of risk: when you prepare such material, there is a large probability that what you prepared does not work on the student's computer. That is why I propose using an online platform to prepare all that material, and sharing the material inside the same platform, so you can be 100% sure it will work no matter who uses it.

I propose using the ROS Development Studio, a platform we have developed at our company, The Construct, for the easy creation, testing, and distribution of ROS code.

The ROS Development Studio (ROSDS for short) allows you to create the material using a web browser on any type of computer. It already provides the simulations you may need (and you can add your own). It also provides a Python notebook structure that you can fill with the material for the students, and it allows you to include any code your students may require in the form of ROS packages.

But the most interesting point of the ROSDS is that it allows you to share all your material by sending a simple web link to the students. That is what we call a ROSject: a complete ROS project that includes simulations, notebooks, and code in a single web link. Whenever you provide this link to anybody, they get a copy of the whole content and can execute it under the exact same conditions in which you created the ROSject.

This sharing feature also makes it very easy for the students to share their programs with you for evaluation, in the case of exams, or for you to help students who are stuck while studying. Since the ROSject contains all the material, it is very easy for you to correct the exam under the conditions in which the student created the program, without copying files or setting up your computer environment. Just tell the students to share the ROSject link with you.

We use ROSjects at LaSalle to deliver the different lessons and exercises. For example, we have a ROSject with the simulation of the coffee shop where the students will do the project evaluation. We also use ROSjects to create the exams (as you can see in the previous section).

Summarizing: ROSjects allow you to encapsulate your lessons in a complete unit that includes the explanations, the simulations, and the code. And all that for free.

Still, you will have to create the ROSjects for your classes. This is something that will change in the near future, as more and more universities create their ROSjects and publish them online for anybody to use.

However, if you do not want to wait until they are created, I can recommend our online academy, the Robot Ignite Academy, where we provide ready-made ROS-based courses about deep learning, manipulation, robot perception, and more. The only drawback is that the academy has a certain cost per student. However, it is highly recommended because it simplifies your preparation. We use the Robot Ignite Academy at LaSalle, but many other universities around the world use it too, like Clarkson University (USA), University of Michigan (USA), Chukyo University (Japan), University of Sydney (Australia), University of Luxembourg (Luxembourg), Université de Reims (France), and University of Alicante (Spain).

Some examples of rosjects. ARIAC and Cartpole created by The Construct, Autorace created by ROBOTIS

Additional topics you may need to cover

Finally, I would like to make a point about a problem that I have identified when using this method.

I find that most of the people who come to our Robot Navigation class have zero knowledge of programming in either Python or C++, and no knowledge of how to use the Linux shell. After interviewing several robotics teachers around the world, I found that this is also the case in other countries.

If that is your case, I would first recommend that you reduce the scope of learning programming to Python. Do not go to C++; C++ is too complex to start teaching together with robotics. Also, Python is very powerful and easy to learn.

I usually create an initial class about Python and the Linux shell. Then, the very next week, I give the students an exam about Python and the Linux shell. The purpose of the exam is to stress to the students the importance of mastering those programming skills. They are the base of everything else.

Here you can find a free online course for learning Python, and here, another one for learning Linux shell.

An additional problem you may find is that the students have trouble understanding English. Most ROS documentation is in English, and the way to communicate in the ROS community is in English (whether we like it or not). You may feel tempted to create the course notes in your mother tongue and provide documentary resources only in your own language. I suggest not doing it. Push your students to learn English. The community needs a common language, and at present it is English.

Webinar about this subject

If you want to know more and discuss the subject, let me suggest you come to the webinar we are doing on 29th November about this matter.

You can check more details about the How To Teach Autonomous Mobile Robotics webinar, and how to subscribe to attend, here.

Conclusion

I started this post talking about the findings of Joel Esposito. If you listen to the podcast interview, you will see that his final conclusion is actually NOT to use ROS to teach robotics. I'm sure there are other teachers with the same opinion. Others, like professor Knepper, advocate for the opposite. Those are points of view, like mine in this article. I recommend you listen to the podcast interview with Joel so you can understand why he suggests that.

It is up to you to decide; you now have several opinions here. Which one is best for you? Please leave your answer in the comments so we can start a debate about it.

Simulations are the key to intelligent robots

I read an article entitled Games Hold the Key to Teaching Artificial Intelligent Systems, by Danny Vena, in which the author states that computer games like Minecraft, Civilization, and Grand Theft Auto have been used to train intelligent systems to perform better in visual learning, understand language, and collaborate with humans. The author concludes that games are going to be a key element in the field of artificial intelligence in the near future. And he is almost right.

In my opinion, the article only touches the surface of artificial intelligence by talking about games. Games have been a good starting point for the generation of intelligent systems that outperform humans, but going deeper into the realm of robots that are useful in human environments will require something more complex than games. And I’m talking about simulations.

Using games to bootstrap intelligence

The idea of beating humans at games has been in artificial intelligence since its birth. Initially, researchers created programs to beat humans at Tic-Tac-Toe and Chess (like, for example, IBM's Deep Blue). However, those games' intelligence was programmed from scratch by human minds: there were people writing the code that decided which move should be next. That manual approach to generating intelligence reached a limit: intelligence is so complex that we realized it may be too difficult to manually write a program that emulates it.

Then a new idea was born: what if we create a system that learns by itself? In that case, the engineers would only have to program the learning structures and set up the proper environment to allow intelligence to bootstrap itself.

AlphaGo beats Lee Sedol (photo credit: intheshortestrun)

The results of that idea are programs that learn to play games better than anyone in the world, even though nobody explains to the program how to play in the first place. For example, Google's DeepMind created the AlphaGo Zero program using that approach. The program was able to beat the best Go players in the world. The company used the same approach to create programs that learned to play Atari games starting from zero knowledge. Recently, OpenAI used this approach for their bot that beats pro players of the Dota 2 game. By the way, if you want to reproduce the results on the Atari games, OpenAI released the OpenAI Gym, containing all the code to start training your system on Atari games and compare its performance against other people's (the basic interaction loop is sketched below).
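
For reference, the basic interaction loop of the classic Gym API is sketched below, with a random policy standing in for the learning algorithm. The environment id is an assumption (Atari environments require extra dependencies); any installed environment works:

import gym

env = gym.make('CartPole-v0')      # placeholder id; use an Atari id if installed
obs = env.reset()                  # classic API: reset() returns an observation
done = False
total_reward = 0.0
while not done:
    action = env.action_space.sample()          # random policy placeholder
    obs, reward, done, info = env.step(action)  # (obs, reward, done, info)
    total_reward += reward
print('episode reward:', total_reward)
env.close()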

What I take from those results is that making an intelligent system generate intelligence by itself is a good approach, and that the algorithms used for teaching games can be used to make robots learn about their space (I'm not so optimistic about the way to encode the knowledge and set up the learning environment and stages, but that is another discussion).

From games to simulations

OpenAI wanted to go further. Instead of using games to generate programs that can play a game, they applied the same idea to make a robot do something useful: learn to manipulate a cube in its hand. In this case, instead of a game, they used a simulation of the robot. The simulation emulated the robot and its environment as if they were real. Then they let the algorithm control the simulated robot and made the robot learn the task by using domain randomization. After many trials, the simulated robot was able to manipulate the block in the expected way. But that was not all! At the end of the article, the authors successfully transferred the learned control program from the simulated robot to a real robot, which performed in a way similar to the simulated one. Except it was real.

Simulated Hand OpenAI Manipulation Experiment (image credit: OpenAI)

Real Hand OpenAI Manipulation Experiment (photo credit: OpenAI)

A similar approach was applied by OpenAI to a Fetch robot trained to grasp a spam box off a table filled with different objects. The robot was trained in simulation, and the learned behavior was successfully transferred to the real robot.

OpenAI teaches a Fetch robot to grasp from a table using simulations (photo credit: OpenAI)

We are getting close to the holy grail in robotics, a complete self-learning system!

Training robots

However, in their experiments, engineers from OpenAI discovered that training robots is a lot more complex than training algorithms to play games. While in games the intelligent system has a very limited set of actions and perceptions available, robots face a huge and continuous spectrum in both domains, actions and perceptions. We can say that the options are infinite.

That increase in the number of options diminishes the usefulness of the algorithms used for reinforcement learning (RL). Usually, the way to deal with the problem is with some artificial tricks, like discarding some of the information completely or artificially discretizing the data values, reducing the options to only a few.

OpenAI engineers found that even if the robots were trained in simulations, their approach could not scale to more complex tasks.

Massive data vs. complex learning algorithms

As Andrew Ng indicated, and as an engineer from OpenAI personally told me based on his results, massive data with simple learning algorithms wins over complicated algorithms with small amounts of data. This means it is not a good idea to focus on building more complex learning algorithms. Instead, the best approach to intelligent robotics would be to use simple learning algorithms trained with massive amounts of data (which makes sense if we observe our own brain: a massive number of neural networks trained over many years).

Google has always known that. Hence, in order to obtain massive amounts of data to train their robots, Google created a real-life system with real robots training all day long in a large space. Even if it is a clever idea, we can all see that this is not practical for every kind of robot and application (broken robots, execution limited to real time, a limited number of robots, a limited number of environments, and so on).

Google robots training for grasping

That leads us to the same solution again: use simulations. By using simulations, we can put any robot in any situation and train it there. Also, we can have a virtually infinite number of them training in parallel and generate massive amounts of data in record time.

Even if that approach looks very clear right now, it was not three years ago, when we created our company, The Construct, around robot simulations in the cloud. I remember exhibiting at the Innorobo 2015 exhibition and finding, after extensive interviews with all the other exhibitors, that only two of them were using simulations in their work. Furthermore, roboticists considered simulations something nasty, to be avoided at all costs, since nothing compares to the real robot (check here for a post I wrote about it at the time).

Thankfully, the situation has changed since then. Now, using simulations to train real robots is starting to become the standard way.

Transferring to real robots

We all know that it is one thing to get a solution working in simulation and another for that solution to work on the real robot. Having the robot do something in simulation doesn't imply it will work the same way on the real robot. Why is that?

Well, there is something called the reality gap. We can define the reality gap as the difference between the simulation of a situation and the real-life situation. Since it is impossible for us to simulate something to a perfect degree, there will always be differences between simulation and reality. If the difference is big enough, it may happen that the results obtained in the simulator are not relevant at all. That is, you have a big reality gap, and what applies in the simulation does not apply to the real world.

The problem of the reality gap is one of the main arguments used to discard simulators for robotics. In my opinion, the path to follow is not to discard simulators and find something else, but to find solutions for crossing that reality gap. As for solutions, I believe we have two options:

1. Create more accurate simulators. That is on its way; we can see efforts in this direction. Some simulators concentrate on better physics (Mujoco); others on a closer look to reality (Unreal- or Unity-based simulators, like Carla or AirSim). We can expect that, as computing power continues to increase and cloud systems become more accessible, the accuracy of simulations will keep increasing in both senses, physics and looks.

2. Build better ways to cross the reality gap. In his original work, Noise and the reality gap, Jakobi (the person who identified the problem of the reality gap) indicated that one of the first solutions is to make a simulation independent of the reality gap. His idea was to introduce noise into those variables that are not relevant to the task. The modern version of that noise introduction is the concept of domain randomization, as described in the paper Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World.

Domain randomization basically consists of training the robot in a simulated environment where the features not relevant to the task are changed randomly: the colors of the elements, the light conditions, the relative position of other elements, etc.

The goal is to make the trained algorithm unaffected by those elements of the scene that provide no information for the task at hand but which may confuse it (because the algorithm doesn't know which parts are relevant to the task). I see domain randomization as a way to tell the algorithm where to focus its attention within the huge flow of data it is receiving. A minimal sketch of the idea follows after the figure below.

Domain randomization applied by OpenAI to train a Fetch robot in simulation (photo credit: OpenAI)
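
Here is a minimal, self-contained sketch of the idea, with made-up parameters and a commented-out hook into the simulator (the real set of randomized properties depends on the task and on your simulator's API):

import random
from dataclasses import dataclass

@dataclass
class EpisodeDomain:
    """Task-irrelevant properties re-sampled before every training episode."""
    light_intensity: float
    table_rgb: tuple
    camera_dx: float
    camera_dy: float

def sample_domain():
    return EpisodeDomain(
        light_intensity=random.uniform(0.3, 1.5),
        table_rgb=(random.random(), random.random(), random.random()),
        camera_dx=random.uniform(-0.05, 0.05),
        camera_dy=random.uniform(-0.05, 0.05),
    )

for episode in range(3):
    domain = sample_domain()
    # apply_to_simulator(domain)  # hypothetical hook into your simulator
    print(f'episode {episode}: {domain}')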

In more recent work, the OpenAI team released a very interesting paper that improves domain randomization by introducing the concept of dynamics randomization. In this case, it is not the environment that changes in the simulation but the properties of the robot (like its mass, the distance between its grippers, etc.). The paper is Sim-to-Real Transfer of Robotic Control with Dynamics Randomization. That is the approach OpenAI engineers took to successfully achieve the manipulation robot.

Some software along these lines

What follows is a list of software that allows the training of robots in simulations. I'm not including plain robotics simulators (like Gazebo, Webots, and V-REP) because they are just that, robot simulators in the general sense. The software listed here goes one step beyond and provides a more complete solution for doing the training in simulations. I have also left out the system used by OpenAI (Mujoco) because it requires building your own development environment.

Carla

Carla is an open source simulator for self-driving cars based on Unreal Engine. It has recently included a ROS bridge.

Carla simulator (photo credit: Carla)

Microsoft Airsim

The Microsoft AirSim drone simulator follows a similar approach to Carla, but for drones. Recently, they updated the simulator to also include self-driving cars.

AirSim (photo credit: Microsoft)

Nvidia Isaac

Nvidia Isaac aims to be a complete solution for training robots in simulation and then transferring the results to real robots. Nothing is available yet, but they are working on it.

Isaac (photo credit: Nvidia)

ROS Development Studio

The ROS Development Studio is the development environment our company created. It was conceived from the beginning to simulate and train any ROS-based robot, requiring nothing to be installed on your computer (it is cloud-based). Simulations for the robots are already provided with all the ROS controllers up and running, as well as the machine learning tools. It includes a system of Gym cloud computers for the parallel training of robots on as many computers as needed.

ROS Development Studio showing an industrial environment

Here is a video showing a simple example of training two cartpoles in parallel using the Gym computers inside the ROS Development Studio:


(Readers, if you know other software like this that I can add, let me know.)

Conclusion

Making all those deep neural networks learn in a training simulation is the way to go, and as we may see in the future, this is just the tip of the iceberg. My personal opinion is that intelligence is more embodied than current AI approaches admit: you cannot have intelligence without a body. Hence, I believe the use of simulated embodiments will be even greater in the future. We'll see.

How to start with self-driving cars using ROS

Self-driving cars are inevitable.

In recent years, self-driving cars have become a priority for automotive companies. BMW, Bosch, Google, Baidu, Toyota, GE, Tesla, Ford, Uber and Volvo are investing in autonomous driving research. Also, many new companies have appeared in the autonomous cars industry: Drive.ai, Cruise, nuTonomy, Waymo to name a few (read this post for a list of 260 companies involved in the self-driving industry).

The rapid development of this field has prompted a large demand for autonomous car engineers. Among the required skills, knowing how to program with ROS is becoming an important one. Just visit the robotics-worldwide list to see the large number of job offers for working/researching in autonomous cars that demand knowledge of ROS.

Why ROS is interesting for autonomous cars

Robot Operating System (ROS) is a mature and flexible framework for robotics programming. ROS provides the required tools to easily access sensor data, process it, and generate an appropriate response for the motors and other actuators of the robot. The whole ROS system has been designed to be fully distributed in terms of computation, so different computers can take part in the control processes and act together as a single entity (the robot).

Due to these characteristics, ROS is a perfect tool for self-driving cars. After all, an autonomous vehicle can be considered just another type of robot, so the same types of programs can be used to control it. ROS is interesting because:

1. There is a lot of code for autonomous cars already created. Autonomous cars require algorithms that can build a map, localize the robot using lidars or GPS, plan paths along maps, avoid obstacles, process point clouds or camera data to extract information, etc. Many algorithms designed for the navigation of wheeled robots are almost directly applicable to autonomous cars. Since those algorithms are already available in ROS, self-driving cars can use them off the shelf.

2. Visualization tools are already available. ROS has created a suite of graphical tools that allow the easy recording and visualization of data captured by the sensors, and representation of the status of the vehicle in a comprehensive manner. Also, it provides a simple way to create additional visualizations required for particular needs. This is tremendously useful when developing the control software and trying to debug the code.

3. It is relatively simple to start an autonomous car project with ROS onboard. You can start right now with a simple wheeled robot equipped with a pair of wheels, a camera, a laser scanner, and the ROS navigation stack, and be set up in a few hours. That could serve as a basis to understand how the whole thing works. Then you can move to more professional setups, like buying a car that is already prepared for autonomous driving experiments, with full ROS support (like the Dataspeed Inc. Lincoln MKZ DBW kit).

Self-driving car companies have identified those advantages and have started to use ROS in their developments. Examples of companies using ROS include BMW (watch their presentation at ROSCON 2015), Bosch, and nuTonomy.

Weak points of using ROS

ROS is not all nice and good. At present, ROS presents two important drawbacks for autonomous vehicles:

1. Single point of failure. All ROS applications rely on a software component called the roscore. That component, provided by ROS itself, is in charge of handling all coordination between the different parts of a ROS application. If the component fails, the whole ROS system goes down. This implies that it does not matter how well your ROS application has been constructed; if the roscore dies, your application dies.

2. ROS is not secure. The current version of ROS does not implement any security mechanism to prevent third parties from getting into the ROS network and reading the communication between nodes. This implies that anybody with access to the car's network can get into the ROS messaging and hijack the car's behavior.

Both drawbacks are expected to be solved in the newest version of ROS, ROS 2. Open Robotics, the creators of ROS, have recently released a second beta of ROS 2, which can be tested here. A release version is expected by the end of 2017.

In any case, we believe that the ROS-based path to self-driving vehicles is the way to go. That is why we propose a low-budget learning path for becoming a self-driving car engineer, based on the ROS framework.

Our low cost solution to become a self-driving car engineer

Step 1
The first thing you need to do is learn ROS. ROS is quite a complex framework and requires dedication and effort to learn. Watch the following video for a list of the 5 best methods to learn ROS. Learning basic ROS will help you understand how to create programs with the framework and how to reuse programs made by others.

Step 2
Next, you need to become familiar with the basic concepts of robot navigation with ROS. Learning how the ROS navigation stack works will give you knowledge of basic concepts in navigation like mapping, path planning, and sensor fusion. There is no better way to learn this than by taking the ROS Navigation in 5 Days course developed by Robot Ignite Academy (disclaimer: this is provided by my company, The Construct).

Step 3
The third step is to learn the basic ROS applications for autonomous cars: how to use the sensors available in any standard autonomous car, how to navigate using a GPS, how to create an algorithm for obstacle detection based on sensor data, how to interface ROS with the CAN-bus protocol used in all the cars in the industry, and so on.

The following video tutorial is ideal for starting to learn ROS applied to autonomous vehicles from zero. The course teaches how to program a car with ROS for autonomous navigation using an autonomous car simulation. The video is available for free, but if you want to get the most out of it, we recommend doing the exercises at the same time by enrolling in the Robot Ignite Academy.
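
As a flavor of the exercises in this step, here is a minimal obstacle-detection sketch; the /scan topic name and the 1.5 m threshold are assumptions, since every vehicle names its laser topic differently:

    #!/usr/bin/env python
    # Minimal sketch: warn when the laser scanner sees something close.
    import rospy
    from sensor_msgs.msg import LaserScan

    SAFE_DISTANCE = 1.5  # meters (arbitrary threshold for illustration)

    def on_scan(scan):
        # Keep only valid readings, then check the closest return.
        valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
        if valid and min(valid) < SAFE_DISTANCE:
            rospy.logwarn('obstacle at %.2f m!', min(valid))

    rospy.init_node('obstacle_detector')
    rospy.Subscriber('/scan', LaserScan, on_scan)
    rospy.spin()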

Step 4
After the basic ROS for autonomous cars course, you should learn more advanced subjects like obstacle and traffic signal identification, road following, and coordination of vehicles at crossroads. For that purpose, our recommendation is the Duckietown project at MIT. The project provides complete instructions to physically build a small-scale town, with lanes, traffic lights, and traffic signals, to practice algorithms in the real world (even if at a small scale). It also provides instructions to build the autonomous cars that populate the town. The cars are based on a differential drive with a single camera as the only sensor, which is how they achieve a very low cost (around $100 per car).

Image by Duckietown project

Due to the low monetary requirements and the hands-on experience it offers, the Duckietown project is ideal for practicing autonomous car concepts like vision-based line following, detecting other cars, and traffic-signal-based behavior. If your budget is below even that cost, you can use a Gazebo simulation of the Duckietown and still practice most of the content.
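
To give an idea of what a first Duckietown-style algorithm looks like, here is a minimal vision-based line-following sketch. The camera topic, the /cmd_vel interface, the HSV threshold for the yellow line, and the gains are all illustrative assumptions, not the actual Duckietown code:

    #!/usr/bin/env python
    # Minimal sketch: steer toward a yellow line seen by the camera.
    import rospy
    import cv2
    from cv_bridge import CvBridge
    from sensor_msgs.msg import Image
    from geometry_msgs.msg import Twist

    bridge = CvBridge()

    def on_image(msg):
        img = bridge.imgmsg_to_cv2(msg, 'bgr8')
        h, w = img.shape[:2]
        # Threshold yellow pixels in the lower half of the image.
        hsv = cv2.cvtColor(img[h // 2:, :], cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (20, 100, 100), (30, 255, 255))
        m = cv2.moments(mask)
        cmd = Twist()
        if m['m00'] > 0:
            cx = m['m10'] / m['m00']                 # line centroid in pixels
            cmd.linear.x = 0.1                       # creep forward
            cmd.angular.z = -0.005 * (cx - w / 2.0)  # steer toward the line
        pub.publish(cmd)

    rospy.init_node('line_follower')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
    rospy.Subscriber('/camera/image_raw', Image, on_image)
    rospy.spin()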

Step 5
Then, if you really want to go pro, you need to practice with real-life data. For that purpose, we propose you install and learn from the Autoware project. This project provides real data obtained from real cars on real streets, by means of ROS bags. ROS bags are logs containing data captured from sensors, which ROS programs can consume as if they were connected to the real car. Using those bags, you can test algorithms as if you had an autonomous car to practice with (the only limitation is that the data is always the same and restricted to the situation that occurred when it was recorded).
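
Besides replaying bags with the rosbag command-line tool, you can read them directly from Python. Here is a minimal sketch, where drive.bag and the /scan topic are placeholders for whatever the dataset actually provides:

    #!/usr/bin/env python
    # Minimal sketch: read recorded laser scans out of a ROS bag, offline.
    import rosbag

    bag = rosbag.Bag('drive.bag')  # placeholder file name
    for topic, msg, stamp in bag.read_messages(topics=['/scan']):
        # Each iteration yields one message exactly as the sensor produced it.
        print('%.3f: closest return %.2f m' % (stamp.to_sec(), min(msg.ranges)))
    bag.close()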

Image by the Autoware project

The Autoware project is a huge and impressive project that, apart from the ROS bags, provides multiple state-of-the-art algorithms for localization, mapping, and obstacle detection and identification using deep learning. It is somewhat complex, but definitely worth studying for a deeper understanding of ROS with autonomous vehicles. I recommend watching the Autoware ROSCON 2017 presentation for an overview of the system (it will be available in October 2017).

Step 6
The final step is to start implementing your own ROS algorithms for autonomous cars and testing them in different, realistic situations. The previous steps provided you with real-life data, but the bags were limited to the situations in which they were recorded. Now it is time to test your algorithms in different situations. You can start with existing algorithms from a mix of all the steps above, but at some point you will find that those implementations lack things required for your goals. You will have to develop your own algorithms, and you will need lots of tests. For this purpose, one of the best options is a Gazebo simulation of an autonomous car as a testbed for your ROS algorithms. Recently, Open Robotics released a simulation of cars for the Gazebo 8 simulator.

Image by Open Robotics

That ROS-based simulation contains a Prius car model equipped with a 16-beam lidar on the roof, 8 ultrasonic sensors, 4 cameras, and 2 planar lidars, which you can use to practice and create your own self-driving car algorithms. With the simulation, you can put the car in as many different situations as you want, check whether your algorithm works in them, and repeat as many times as needed until it does.
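
A first program for such a simulation can be very small. Here is a minimal reactive-brake sketch; the topic names are illustrative assumptions, so check the ones the simulation actually advertises before using it:

    #!/usr/bin/env python
    # Minimal sketch: creep forward, stop when something is close in front.
    import rospy
    from sensor_msgs.msg import LaserScan
    from geometry_msgs.msg import Twist

    def on_scan(scan):
        valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
        cmd = Twist()
        # Drive at 1 m/s unless an obstacle is closer than 3 m.
        cmd.linear.x = 0.0 if (valid and min(valid) < 3.0) else 1.0
        pub.publish(cmd)

    rospy.init_node('reactive_brake')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
    rospy.Subscriber('/front_laser/scan', LaserScan, on_scan)
    rospy.spin()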

Conclusion

Autonomous driving is an exciting subject, with demand for experienced engineers increasing year after year. ROS is one of the best options to quickly jump into the subject, so learning ROS for self-driving vehicles is becoming an important skill for engineers. We have presented here a full path to learn ROS for autonomous vehicles while keeping the budget low. Now it is your turn to make the effort and learn. Money is not an excuse anymore. Go for it!

The need for robotics standards

Last week I was talking to a lead engineer of a Singapore company that is building a benchmarking system for robot solutions. Having seen my presentation at ROSCON 2016 about robot benchmarking, he asked me how I would benchmark solutions that are not ROS compatible. I said that I wouldn't dedicate time to benchmarking solutions that are not ROS-based. Instead, I would use the time to polish the ROS-based benchmark and suggest that vendors adopt that middleware in their products.

Benchmarks are necessary and they need standards

Benchmarks are necessary to improve any field. With a benchmark, different solutions to a single problem can be compared, and hence a direction for improvement can be traced. Currently, robotics lacks such a benchmarking system.

I strongly believe that to create a benchmark for robotics we need a standard at the level of programming.

By having a standard at the level of programming, manufacturers can build their own hardware solutions at will, as long as they are programmable with the programming standard. That is the approach taken by devices that can be plugged into a computer. Manufacturers create the product on their own terms, and then provide a Windows driver that allows any computer in the world (that runs Windows) to communicate with the product. Once this computer-to-product communication is made, you can create programs that compare the same type of devices from different manufacturers for performance, quality, noise, whatever your benchmark is trying to compare.

You see? Different types of devices, different types of hardware. But all of them can be compared through the same benchmarking system that relies on the Windows standard.
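
Translated back to robotics, a benchmark built on such a standard needs to know nothing about the hardware, only the agreed interfaces. Here is a minimal sketch of that idea, assuming only that the robot under test exposes the conventional /cmd_vel and /odom ROS topics; it measures the time the robot takes to cover one meter:

    #!/usr/bin/env python
    # Minimal sketch: a hardware-agnostic "time to travel one meter" benchmark.
    import rospy
    from geometry_msgs.msg import Twist
    from nav_msgs.msg import Odometry

    start_x = None

    def on_odom(odom):
        global start_x
        x = odom.pose.pose.position.x
        if start_x is None:
            start_x = x               # first reading: remember the origin
        elif x - start_x >= 1.0:      # benchmark condition reached
            rospy.loginfo('1 m covered in %.2f s', (rospy.Time.now() - t0).to_sec())
            rospy.signal_shutdown('benchmark done')

    rospy.init_node('one_meter_benchmark')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
    t0 = rospy.Time.now()
    rospy.Subscriber('/odom', Odometry, on_odom)

    cmd = Twist()
    cmd.linear.x = 0.2  # same commanded speed for every robot under test

    rate = rospy.Rate(10)
    try:
        while not rospy.is_shutdown():
            pub.publish(cmd)
            rate.sleep()
    except rospy.ROSInterruptException:
        pass  # shutdown requested by the callback

The same script runs unchanged against any vendor's base, which is exactly the point of having the standard.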

Software development for robots also needs standards

Standards are needed not only for comparing solutions but also to speed up robotics development. With a robotics standard, developers can concentrate on building solutions that do not have to be re-implemented whenever the robot hardware changes. Indeed, given the middleware structure, developers can dissociate themselves from the hardware enough to spend almost 100% of their time in the software realm, while still developing code for robots.

We need the same type of standard for robotics. We need a kind of operating system that allows us to compare different robotics solutions. We need the Windows of the PCs, the Android of the phones, the CAN of the buses…


A few standard proposals and a winner

But you already know that. I'm not the first one to state this. Actually, many people have already tried to create such a standard. Examples include Player, ROS, YARP, OROCOS, Urbi, MIRA, and JdE Robot, to name a few.

Personally, I don't care which standard is used. It could be ROS, it could be YARP, or it could be another one that has not been created yet. The only thing I really care about is that we adopt a standard as soon as possible.

And it looks like the developers have decided. Robotics developers prefer ROS as their common middleware to program robots.

No other middleware for robotics has had such a large adoption. Some data about it:

                                                    ROS        YARP      OROCOS
Number of Google pages:                         243,000      37,000      42,000
Citations of the paper describing the
middleware:                                       3,638         463         563
Alexa ranking:                                   14,118   1,505,000     668,293

Note 1: Only showing the current big three players.

Note 2: Very simple comparison. Difficult to compare in other terms since data is not available.

Note 3: Data measured in August 2017. May vary at the time you are reading this. Links provided on the numbers themselves, so you can check yourself.

This is not only the feeling that we, roboticists, have. The numbers also indicate that ROS is becoming the standard for robotics programming.


Why ROS?

The question, then, is why ROS has emerged on top of all the other possible contestants. None of them is worse than ROS in terms of features; actually, you can find features in each of the other middlewares that outperform ROS. If that is so, why, or how, has ROS achieved the status of becoming the standard?

A simple answer from my point of view: excellent learning tutorials and debugging tools.

Here is a video where Leila Takayama, an early developer of ROS, explains how she realized that the key to having ROS used worldwide would be providing tools that simplify the reuse of ROS code. None of the other projects has such a set of clear and structured tutorials. In addition, few of the other middlewares provide debugging tools for their packages. The lack of these two essential aspects is preventing new people from adopting those middlewares (even if I understand the developers of OROCOS and YARP for not providing them… who wants to write tutorials or build debugging tools… nobody!).

Additionally, it is not only about tutorials and debugging tools. The ROS creators also provide a good package management system. The result is that developers worldwide can use others' packages in a (relatively) easy way. This created an explosion in the number of available ROS packages, providing almost anything off the shelf for your brand new ROSified robot.

Now, the impressive rate at which contributions are made to the ROS ecosystem makes its growth almost unstoppable.


What about companies?

At the beginning, ROS was mostly used by university students. However, as ROS becomes more mature and the number of packages increases, companies are realizing that adopting ROS is also good for them: they can use code developed by others, and it is easier to hire new engineers who already know the middleware (otherwise they would need to teach newcomers their own middleware).

As a result, many companies have jumped onto the ROS train, developing their products from scratch to be ROS compatible. Examples include Robotnik, Fetch Robotics, Savioke, Pal Robotics, Yujin Robots, The Construct, Rapyuta Robotics, Erle Robotics, Shadow Robot, and Clearpath, to name a few of the sponsors of the next ROSCON. By creating ROS-compatible products, they dramatically reduced their development time.

To take things further, two Spanish companies have revolutionised the standardization of robotics products using the ROS middleware. On one side, Robotnik has created the ROS Components shop, where anyone can buy ROS-compatible devices, from mobile bases to sensors and actuators. On the other side, Erle Robotics (now Acutronic Robotics) is developing Hardware ROS. The H-ROS is a standardized software and hardware infrastructure to easily create reusable and reconfigurable robot hardware parts. ROS is enabling hardware standardization too, but this time driven by companies, not research! That must mean something…


Finally, it looks like industrial robot manufacturers have understood the value that a standard can provide to their business. Even if they do not make their industrial robots ROS-enabled from the start, they are adopting ROS-Industrial, a flavour of ROS that allows them to ROSify their industrial robots and reuse all the software created for manipulators in the ROS ecosystem.

But are all companies jumping onto the ROS train? Not all of them!

Some companies like Jibo, Aldebaran, or Google still do not rely on ROS for their robot programming. Some rely on their own middleware, created before ROS existed (that is the case of Aldebaran). Others, though, are creating their own middleware from scratch. Their reasons: they do not believe ROS is good enough, they have already built a middleware, or they do not want their products to depend on somebody else's middleware. Those companies have fair reasons to go their own way. However, will that make them competitive? (If history is a guide, as with mobile phones and VCRs, the answer may be no.)

So is ROS the standard for programming robots?

It is still too soon to answer that question. ROS looks like it is becoming the standard, but many things can change. It is unlikely that another middleware will take the current title from ROS, but it may happen. There could be a new player that wipes ROS off the map (maybe Google will release its middleware to the public, like it did with Android, and take the sector by storm?).

Still, ROS has its problems, like the lack of security or the instability of some important packages. Even if the OSRF group is working hard to build a better ROS system (for instance, ROS 2 is in beta phase with many improvements at its roots), some hard work is still required on some basic things (like the ROS controllers for real robots).


Given those numbers, at The Construct we believe that ROS IS THE STANDARD (that is why we are devoted to creating the best ROS learning tutorials in the world). Actually, it was thanks to this standardization that two Barcelona students were able to create an autonomous robot product for coffee shops in only three months, with zero prior knowledge of robotics (see the Barista robot).

This is the future, and it is good. In this future, thanks to standards, almost anyone will be able to build, design, and program their own robotics product, similar to how PC stores build computers today.

So my advice, as I said to the Singapore engineer, is to bet on ROS. Right now, it is the best option for a robotics standard.

Teaching ROS quickly to students

Lecturer Steffen Pfiffner of the University of Weingarten in Germany is teaching ROS to 26 students at the same time, at a very fast pace. His students, all enrolled in the Master in Computer Science at the University of Weingarten, use only a web browser. They connect to a web page containing the lessons, a ROS development environment, and several ROS-based simulated robots. Using the browser, Pfiffner and his colleague Benjamin Stähle are able to teach ROS quickly and to many students. This is what Robot Ignite Academy is made for.

“With Ignite Academy our students can jump right into ROS without all the hardware and software setup problems. And the best: they can do this from everywhere,” says Pfiffner.

Robot Ignite Academy provides a web service that contains the teaching material in text and video format, the simulations of several ROS-based robots that the students learn to program, and the development environment required to build ROS programs and test them on the simulated robots.

Student’s point of view

Students bring their own laptops to the class and connect to the online platform. From that moment, their laptop becomes a ROS development machine, ready to develop programs for many simulated robots.

The Academy provides the text, the videos, and the examples that the student has to follow. Then the student creates her own ROS program to make the robot perform a specific action, developing it just as she would on a typical ROS development computer.

The main advantage is that students can use a Windows, Linux, or Mac machine to learn ROS. They do not even have to install ROS on their computers; the only prerequisite for the laptop is a browser. So students avoid all the installation problems that frustrate them (and the teachers!), especially when they are starting out.

After class, students can continue learning at home, in the library, or even at the beach if wifi is available! All their code, learning material, and simulations are stored online, so they can access them from anywhere, anytime, using any computer.

Teacher’s point of view

The advantage of using the platform is not only for the students but also for the teachers. Teachers do not have to create and maintain the material. They do not have to prepare the simulations or work on multiple different computers. They do not even have to prepare the exams! (Those are already provided by the platform.)

So what are the teachers for?

By making use of the provided material, the teacher can concentrate on guiding the students: explaining the most confusing parts, answering questions, suggesting modifications according to the level of each student, and adapting the pace to the different types of students.

This new method of teaching ROS is spreading quickly among the universities and high schools that want to provide the latest and most practical teaching to their students. The method, developed by Robot Ignite Academy, combines practice-based teaching with an online learning platform. Those two points combined make the teaching of ROS a smooth experience and can make the students' knowledge skyrocket.

As user Walace Rosa indicates in his video comment about Robot Ignite Academy:

It is a game changer [in] teaching ROS!

The method is becoming very popular in robotics circles too, and many teachers are using it for younger students. For example, High School Mundet in Barcelona is using it to teach ROS to 15-year-old students.

Additionally, the academy provides a free online certification exam with different levels of knowledge certification. Many universities use this exam to certify that their students actually learned the material, since the exam is quite demanding.

Some examples of past events

  •  1-week ROS course in Barcelona for SMART-E project team members. A private course given by Robot Ignite Academy in Barcelona for 15 members of the SMART-E project who needed to get up to speed with ROS fast. From the 8th to the 12th of May 2017.
  •  1-day ROS course for the Col·legi d’Enginyers de Barcelona. The 17th of May 2017.
  •  3-month course for the University of La Salle in Barcelona, within the Master in Automatics, Domotics and Robotics. From the 10th of May to the 29th of June 2017.
  •  1-weekend ROS course for teenagers in Bilbao, Spain. The 20th and 21st of May 2017.
  •  We can also organize a special event like these for you and your team.

Helpful ROS videos

Developing ROS programs for the Sphero robot

You probably know the Sphero robot: a small robot with the shape of a ball. If you have one, you should know that you can control it with ROS by installing the Sphero ROS packages developed by Melonee Wise on your computer and connecting to the robot over Bluetooth.

Now, you can use the ROS Development Studio to create ROS control programs for that robot, testing them as you go with the integrated simulation.

The ROS Development Studio (RDS) provides, off the shelf, a simulation of Sphero in a maze environment. The simulation provides the same interface as the ROS module created by Melonee, so you can develop and test your programs in the simulation and, once they work properly, transfer them to the real robot.

We created the simulation to teach ROS to the students of the Robot Ignite Academy. They have to learn enough ROS to get the Sphero out of the maze using odometry and the IMU.
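
As a taste of what the students end up writing, here is a minimal sketch that rolls the Sphero forward while printing its odometry. It assumes the simulation exposes the same /cmd_vel and /odom interface as the real Sphero package:

    #!/usr/bin/env python
    # Minimal sketch: roll the simulated Sphero and watch its position.
    import rospy
    from geometry_msgs.msg import Twist
    from nav_msgs.msg import Odometry

    def on_odom(odom):
        p = odom.pose.pose.position
        rospy.loginfo('Sphero at (%.2f, %.2f)', p.x, p.y)

    rospy.init_node('sphero_explorer')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
    rospy.Subscriber('/odom', Odometry, on_odom)

    cmd = Twist()
    cmd.linear.x = 0.1  # roll gently forward
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()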

Using the simulation

To use the Sphero simulation on RDS, go to rds.theconstructsim.com and sign in. If you select the Public simulations, you will quickly find the Sphero simulation.

Press the red Play button. A new screen will appear with details about the simulation, asking which launch file you want to launch. The main.launch file selected by default is the correct one, so just press Run.

After a few seconds the simulation will appear together with the development environment for creating the programs for Sphero and testing them.

On the left-hand side there is a notebook containing information about the robot and how to program it with ROS. The notebook contains just some examples, but you can complete or modify it at will. It is an iPython notebook and follows its standard, so it is up to you to modify it or add new information. Remember that any change you make to the notebook is saved with the simulation in your private area of RDS, so you can come back later and launch it with your modifications.

The code included in the notebook is directly executable: select a code cell (a single click on it) and press the small play button at the top of the notebook. The code will then execute and control the simulated Sphero for a few time steps (remember to have the simulation running, with its Play button active, to see the robot move).

In the center area is the IDE, the development environment for writing code. There you can browse all the packages related to the simulation, or any other packages you may create.

On the right-hand side are the simulation and, beneath it, the shell. The simulation shows the Sphero robot and the maze environment. In the shell, you can issue commands on the computer that runs the simulation. For instance, you can use the shell to launch the keyboard controller and move the Sphero around. Try typing the following:

  • $ roslaunch sphero_gazebo keyboard_teleop.launch

Now you should be able to move the robot around the maze by pressing keys on the keyboard (instructions are provided on screen).

You can also launch Rviz there and watch the robot, its frames, and any other additional information you may want. Type the following:

  • $ rosrun rviz rviz

Then press the red Screen icon located at the bottom left of the screen (the graphical tools). A new tab should appear, showing Rviz loading. After a while, you can configure Rviz to show the information you want.

There are many ways you can configure the screen to provide more focus to what interests you the most.

To end this post, note that you can download the simulation to your computer at any time by right-clicking on the directories and selecting Download. You can also clone The Construct simulations repository to download it (among other simulations available).
