Page 398 of 430

New Horizon 2020 robotics projects: ROSIN

In 2016, the European Union co-funded 17 new robotics projects under the Horizon 2020 Framework Programme for research and innovation. Sixteen of these resulted from the robotics work programme, and one from the Societal Challenges part of Horizon 2020. The robotics work programme implements the robotics strategy developed by SPARC, the Public-Private Partnership for Robotics in Europe (see the Strategic Research Agenda).

EuRobotics regularly publishes video interviews with projects, so that you can find out more about their activities. You can also see many of these projects at the upcoming European Robotics Forum (ERF) in Tampere, Finland, March 13–15.

This week features ROSIN: ROS-Industrial quality-assured robot software.


Objectives

Make ROS-Industrial the open-source industrial standard for intelligent industrial robots, and put Europe in a leading position within this global initiative.

Presently, potential users are waiting for improved quality and quantity of ROS-Industrial components, but both can improve only when more parties contribute and use ROS-Industrial. We will apply European funding to address both sides of this stalemate:

  • improving the availability of high-quality components, through Focused Technical Projects and software quality assurance.
  • increasing the community size, until ROS becomes self-sustaining as an industrial standard, through an education program and dissemination.

Expected Impact

ROSIN will propel the open-source robot software project ROS-Industrial beyond the critical mass required for further autonomous growth. As a result, it will become a widely adopted standard for industrial intelligent robot software components, e.g. for 3D perception and motion planning. System integrators, software companies, and robot producers will use the open-source framework and the rich choice of libraries to build their own closed-source derivatives, which they will sell and support for industrial customers.

Partners

TECHNISCHE UNIVERSITEIT DELFT (TU Delft)
FRAUNHOFER GESELLSCHAFT ZUR FOERDERUNG DER ANGEWANDTEN FORSCHUNG E.V. (FHG)
IT-UNIVERSITETET I KOBENHAVN (ITU)
FACHHOCHSCHULE AACHEN (FHA)
FUNDACION TECNALIA RESEARCH & INNOVATION (TECNALIA)
ABB AB (ABB AB)

Coordinator: Prof. Martijn Wisse
Contact: Dr. Carlos Hernandez
Delft University of Technology

Project website: www.rosin-project.eu


Robo-picker grasps and packs

Image: Melanie Gonick/MIT
By Jennifer Chu

Unpacking groceries is a straightforward albeit tedious task: You reach into a bag, feel around for an item, and pull it out. A quick glance will tell you what the item is and where it should be stored.

Now engineers from MIT and Princeton University have developed a robotic system that may one day lend a hand with this household chore, as well as assist in other picking and sorting tasks, from organizing products in a warehouse to clearing debris from a disaster zone.

The team’s “pick-and-place” system consists of a standard industrial robotic arm that the researchers outfitted with a custom gripper and suction cup. They developed an “object-agnostic” grasping algorithm that enables the robot to assess a bin of random objects and determine the best way to grip or suction onto an item amid the clutter, without having to know anything about the object before picking it up.

Once it has successfully grasped an item, the robot lifts it out from the bin. A set of cameras then takes images of the object from various angles, and with the help of a new image-matching algorithm the robot can compare the images of the picked object with a library of other images to find the closest match. In this way, the robot identifies the object, then stows it away in a separate bin.

In general, the robot follows a “grasp-first-then-recognize” workflow, which turns out to be an effective sequence compared to other pick-and-place technologies.

“This can be applied to warehouse sorting, but also may be used to pick things from your kitchen cabinet or clear debris after an accident. There are many situations where picking technologies could have an impact,” says Alberto Rodriguez, the Walter Henry Gale Career Development Professor in Mechanical Engineering at MIT.

Rodriguez and his colleagues at MIT and Princeton will present a paper detailing their system at the IEEE International Conference on Robotics and Automation in May.

Building a library of successes and failures

While pick-and-place technologies may have many uses, existing systems are typically designed to function only in tightly controlled environments.

Today, most industrial picking robots are designed for one specific, repetitive task, such as gripping a car part off an assembly line, always in the same, carefully calibrated orientation. Rodriguez, however, is working to design more flexible, adaptable, and intelligent pickers for unstructured settings such as retail warehouses, where a picker may encounter and have to sort hundreds, if not thousands, of novel objects each day, often amid dense clutter.

The team’s design is based on two general operations: picking — the act of successfully grasping an object, and perceiving — the ability to recognize and classify an object, once grasped.   

The researchers trained the robotic arm to pick novel objects out from a cluttered bin, using any one of four main grasping behaviors: suctioning onto an object, either vertically, or from the side; gripping the object vertically like the claw in an arcade game; or, for objects that lie flush against a wall, gripping vertically, then using a flexible spatula to slide between the object and the wall.

Rodriguez and his team showed the robot images of bins cluttered with objects, captured from the robot’s vantage point. They then showed the robot which objects were graspable, with which of the four main grasping behaviors, and which were not, marking each example as a success or failure. They did this for hundreds of examples, and over time, the researchers built up a library of picking successes and failures. They then incorporated this library into a “deep neural network” — a class of learning algorithms that enables the robot to match the current problem it faces with a successful outcome from the past, based on its library of successes and failures.
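The library-of-attempts idea can be sketched in miniature. The toy Python below is purely illustrative — the `BEHAVIORS` list matches the four behaviors described above, but the two-number scene "features", the `record`/`best_behavior` helpers, and the nearest-neighbor lookup are invented stand-ins; the actual system trains a deep neural network directly on images:

```python
BEHAVIORS = ["suction-down", "suction-side", "grasp-down", "flush-grasp"]

def record(library, features, behavior, success):
    """Append one labeled picking attempt to the library."""
    library.append((features, behavior, success))

def best_behavior(library, features):
    """Choose the behavior whose most similar past attempt succeeded,
    preferring closer matches."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best, best_key = None, None
    for target in BEHAVIORS:
        attempts = [(dist(features, f), ok) for f, b, ok in library if b == target]
        if not attempts:
            continue  # no experience with this behavior yet
        d, ok = min(attempts)  # nearest past attempt with this behavior
        key = (ok, -d)         # prefer successes, then closer matches
        if best_key is None or key > best_key:
            best, best_key = target, key
    return best

# Tiny library; each made-up feature vector is (clutter level, object height)
library = []
record(library, (0.2, 0.9), "suction-down", True)
record(library, (0.3, 0.8), "suction-down", True)
record(library, (0.9, 0.1), "flush-grasp", True)    # object flush against a wall
record(library, (0.8, 0.2), "suction-down", False)  # suction fails on flush objects

print(best_behavior(library, (0.25, 0.85)))  # → suction-down
print(best_behavior(library, (0.85, 0.15)))  # → flush-grasp
```

In the real system, the notion of "similarity" between scenes is learned by the network rather than hand-crafted, which is what lets it generalize to objects it has never picked before.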

“We developed a system where, just by looking at a tote filled with objects, the robot knew how to predict which ones were graspable or suctionable, and which configuration of these picking behaviors was likely to be successful,” Rodriguez says. “Once it was in the gripper, the object was much easier to recognize, without all the clutter.”

From pixels to labels

The researchers developed a perception system in a similar manner, enabling the robot to recognize and classify an object once it’s been successfully grasped.

To do so, they first assembled a library of product images taken from online sources such as retailer websites. They labeled each image with the correct identification — for instance, duct tape versus masking tape — and then developed another learning algorithm to relate the pixels in a given image to the correct label for a given object.

“We’re comparing things that, for humans, may be very easy to identify as the same, but in reality, as pixels, they could look significantly different,” Rodriguez says. “We make sure that this algorithm gets it right for these training examples. Then the hope is that we’ve given it enough training examples that, when we give it a new object, it will also predict the correct label.”
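As a rough illustration of this matching step, here is a toy nearest-match classifier. The three-number "descriptors" and the product library are made up for the example; in the actual system a neural network computes the features from raw pixels:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Labeled library, e.g. built from retailer product photos
library = {
    "duct tape":    (0.9, 0.1, 0.3),
    "masking tape": (0.8, 0.2, 0.6),
    "scissors":     (0.1, 0.9, 0.2),
}

def identify(views):
    """Label a picked object by its closest library match across all camera views."""
    _, label = max(
        (cosine(view, feat), label)
        for view in views
        for label, feat in library.items()
    )
    return label

print(identify([(0.85, 0.15, 0.35)]))  # → duct tape
```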

Last July, the team packed up the 2-ton robot and shipped it to Japan, where, a month later, they reassembled it to participate in the Amazon Robotics Challenge, a yearly competition sponsored by the online megaretailer to encourage innovations in warehouse technology. Rodriguez’s team was one of 16 taking part in a competition to pick and stow objects from a cluttered bin.

In the end, the team’s robot had a 54 percent success rate in picking objects up using suction and a 75 percent success rate using grasping, and was able to recognize novel objects with 100 percent accuracy. The robot also stowed all 20 objects within the allotted time.

For his work, Rodriguez was recently granted an Amazon Research Award and will be working with the company to further improve pick-and-place technology — foremost, its speed and reactivity.

“Picking in unstructured environments is not reliable unless you add some level of reactiveness,” Rodriguez says. “When humans pick, we sort of do small adjustments as we are picking. Figuring out how to do this more responsive picking, I think, is one of the key technologies we’re interested in.”

The team has already taken some steps toward this goal by adding tactile sensors to the robot’s gripper and running the system through a new training regime.

“The gripper now has tactile sensors, and we’ve enabled a system where the robot spends all day continuously picking things from one place to another. It’s capturing information about when it succeeds and fails, and how it feels to pick up, or fails to pick up objects,” Rodriguez says. “Hopefully it will use that information to start bringing that reactiveness to grasping.”

This research was sponsored in part by ABB Inc., Mathworks, and Amazon.

#254: Collaborative Systems for Drug Discovery, with Peter Harris



In this episode, Abate interviews Peter Harris from HighRes Biosolutions about automation in the field of drug discovery. At HighRes Biosolutions they are developing modular robotic systems that work alongside scientists to automate laboratory tasks. Because the requirements of each biomedical research laboratory are so varied, the robotic systems are specifically tailored to meet the requirements of each lab.


Peter Harris
Peter Harris is the CEO of HighRes Biosolutions. Prior to HighRes, Peter was VP and Managing Director at Axel Johnson, Inc. He spent most of his career as the President & CEO of Cadence, Inc., a high-technology medical device manufacturing and engineering firm enabling medical companies to bring better devices to market faster. Peter has been a Visiting Executive Lecturer at the Darden School of Business at the University of Virginia for over 10 years.

Robots in Depth with Peter Corke


In this episode of Robots in Depth, Per Sjöborg speaks with Peter Corke, distinguished professor of robotic vision from Queensland University of Technology, and Director of the ARC Centre of Excellence for Robotic Vision. Peter is well known for his work in computer vision and has written one of the books that defines the area. He talks about how serendipity made him build a checkers playing robot and then move on to robotics and machine vision. We get to hear about how early experiments with “Blob Vision” got him interested in analyzing images and especially moving images, and his long and interesting journey giving robots eyes to see the world.

The interview ends with Peter adding a new item to his CV, fashion model, when he shows us the ICRA 2018 T-shirt!

Programming drones to fly in the face of uncertainty

Researchers trail a drone on a test flight outdoors.
Photo: Jonathan How/MIT

Companies like Amazon have big ideas for drones that can deliver packages right to your door. But even putting aside the policy issues, programming drones to fly through cluttered spaces like cities is difficult. Being able to avoid obstacles while traveling at high speeds is computationally complex, especially for small drones that are limited in how much they can carry onboard for real-time processing.

Many existing approaches rely on intricate maps that aim to tell drones exactly where they are relative to obstacles, which isn’t particularly practical in real-world settings with unpredictable objects. If their estimated location is off by even just a small margin, they can easily crash.

With that in mind, a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has developed NanoMap, a system that allows drones to consistently fly at 20 miles per hour through dense environments such as forests and warehouses.

One of NanoMap’s key insights is a surprisingly simple one: The system considers the drone’s position in the world over time to be uncertain, and actually models and accounts for that uncertainty.

“Overly confident maps won’t help you if you want drones that can operate at higher speeds in human environments,” says graduate student Pete Florence, lead author on a new related paper. “An approach that is better aware of uncertainty gets us a much higher level of reliability in terms of being able to fly in close quarters and avoid obstacles.”

Specifically, NanoMap uses a depth-sensing system to stitch together a series of measurements about the drone’s immediate surroundings. This allows it to not only make motion plans for its current field of view, but also anticipate how it should move around in the hidden fields of view that it has already seen.

“It’s kind of like saving all of the images you’ve seen of the world as a big tape in your head,” says Florence. “For the drone to plan motions, it essentially goes back into time to think individually of all the different places that it was in.”

The team’s tests demonstrate the impact of uncertainty. For example, if NanoMap wasn’t modeling uncertainty and the drone drifted just 5 percent away from where it was expected to be, the drone would crash more than once every four flights. Meanwhile, when it accounted for uncertainty, the crash rate fell to 2 percent.
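In rough terms, the idea reduces to inflating every safety margin by how uncertain the drone is about a remembered obstacle's relative position. The snippet below is a loose illustration of that principle — the constants and the linear drift model are invented for the example, not taken from the paper:

```python
import math

DRIFT_PER_STEP = 0.05  # assumed growth of position uncertainty per motion step (m)
ROBOT_RADIUS = 0.3     # meters, includes a base safety margin

def is_safe(candidate, obstacle, steps_since_seen):
    """Is a candidate position safe relative to an obstacle seen some steps ago?
    The required clearance is inflated by the accumulated pose uncertainty."""
    uncertainty = DRIFT_PER_STEP * steps_since_seen
    return math.dist(candidate, obstacle) > ROBOT_RADIUS + uncertainty

obstacle = (2.0, 0.0)                    # where the obstacle appeared to be
print(is_safe((1.0, 0.0), obstacle, 2))  # fresh measurement, 1 m clearance → True
print(is_safe((1.0, 0.0), obstacle, 16)) # stale measurement, too uncertain → False
```

The practical effect is the one Florence describes: older, shakier memories still get used for planning, they just demand a wider berth.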

The paper was co-written by Florence and MIT Professor Russ Tedrake alongside research software engineers John Carter and Jake Ware. It was recently accepted to the IEEE International Conference on Robotics and Automation, which takes place in May in Brisbane, Australia.

For years computer scientists have worked on algorithms that allow drones to know where they are, what’s around them, and how to get from one point to another. Common approaches such as simultaneous localization and mapping (SLAM) take raw data of the world and convert them into mapped representations.

But the output of SLAM methods isn’t typically used to plan motions. That’s where researchers often use methods like “occupancy grids,” in which many measurements are incorporated into one specific representation of the 3-D world.

The problem is that such data can be both unreliable and hard to gather quickly. At high speeds, computer-vision algorithms can’t make much of their surroundings, forcing drones to rely on inexact data from the inertial measurement unit (IMU) sensor, which measures things like the drone’s acceleration and rate of rotation.

The way NanoMap handles this is that it essentially doesn’t sweat the minor details. It operates under the assumption that, to avoid an obstacle, you don’t have to take 100 different measurements and find the average to figure out its exact location in space; instead, you can simply gather enough information to know that the object is in a general area.

“The key difference to previous work is that the researchers created a map consisting of a set of images with their position uncertainty rather than just a set of images and their positions and orientation,” says Sebastian Scherer, a systems scientist at Carnegie Mellon University’s Robotics Institute. “Keeping track of the uncertainty has the advantage of allowing the use of previous images even if the robot doesn’t know exactly where it is, and allows improved planning.”

Florence describes NanoMap as the first system that enables drone flight with 3-D data that is aware of “pose uncertainty,” meaning that the drone takes into consideration that it doesn’t perfectly know its position and orientation as it moves through the world. Future iterations might also incorporate other pieces of information, such as the uncertainty in the drone’s individual depth-sensing measurements.

NanoMap is particularly effective for smaller drones moving through smaller spaces, and works well in tandem with a second system that is focused on more long-horizon planning. (The researchers tested NanoMap last year in a program tied to the Defense Advanced Research Projects Agency, or DARPA.)

The team says that the system could be used in fields ranging from search and rescue and defense to package delivery and entertainment. It can also be applied to self-driving cars and other forms of autonomous navigation.

“The researchers demonstrated impressive results avoiding obstacles and this work enables robots to quickly check for collisions,” says Scherer. “Fast flight among obstacles is a key capability that will allow better filming of action sequences, more efficient information gathering and other advances in the future.”

This work was supported in part by DARPA’s Fast Lightweight Autonomy program.

Research arrows are up with one common thread: digital data

MasterCard CEO Ajay Banga said that “data is the new oil.” Masayoshi Son, CEO of SoftBank, says that artificial intelligence combined with data gathered by billions of sensors is bringing on an information revolution.

Manufacturers everywhere are changing – some, with government assistance – because of new technologies, new competitors, new ecosystems, and new ways of doing business. Companies that adopt these new digital-based capabilities are creating value in their businesses and becoming leaders of their industries.

The following 18 recent research reports, grouped into five categories, cover the robotics industry. All indicate double-digit compound annual growth rates (CAGR), and each connects digitalization and data to growth and economic efficiency.

  1. Industrial & collaborative robots
  2. Service & surgical robotics
  3. Agricultural & food handling robotics
  4. Surveillance robots, drones & ROVs
  5. Autonomous vehicles
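For readers comparing the figures below, a CAGR is the constant yearly growth rate that carries a starting value to an ending value over n years. As a quick sanity check on one of the cited forecasts, the tiny helper below (the `cagr` function is ours, not from any report) reproduces Tractica's enterprise-robot figure:

```python
def cagr(start, end, years):
    """Compound annual growth rate as a fraction (0.12 means 12% per year)."""
    return (end / start) ** (1.0 / years) - 1.0

# Tractica's enterprise-robot forecast cited below: 83,000 units in 2016
# growing to 1.2 million units in 2022, reported as a ~57% CAGR.
rate = cagr(83_000, 1_200_000, 2022 - 2016)
print(f"{rate:.0%}")  # → 56%
```

Small discrepancies from the reported numbers typically come down to rounding or to whether the rate was computed on units or on revenue.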

Industrial & collaborative robots

  1. Feb 2018, 123 pages, QY Research, $3,600
    QY doesn’t provide forecasts in their promotional materials.
  2. Dec 2017, 23 pages, ABI Research, $3,500
    Through 2025, global revenue from collaborative robot shipments is forecast to grow at a 49.8% CAGR, compared with 12.1% for industrial robots and 23.2% for service robotics.
  3. Dec 2017, 95 pages, TechNavio, $2,500
    The global robotic injection molding machine market is expected to grow at a CAGR of 4.94% during the period 2017-2021.
  4. Dec 2017, 93 pages, TechNavio, $2,500
    The global handling, degating, and deflashing robots market is expected to grow at a CAGR of close to 11% from 2017-2021.

Service & surgical robots

  1. Dec 2017, 127 pages, Tractica, $4,200
    Tractica forecasts that worldwide shipments of enterprise robots (agriculture, construction, warehousing and logistics, remote presence, customer service) will grow from 83,000 units in 2016 to 1.2 million units in 2022, a CAGR of 57% over that period. Revenue will increase from $5.9 billion in 2016 to $67.9 billion in 2022.
  2. Feb 2018, BIS Research, $4,599
    Analyzes progress of Intuitive Surgical Inc., Stryker, Mazor Robotics, Hansen Medical, MedRobotics, TransEnterix, Accuray, Renishaw, Think Surgical, Synaptive Medical, Titan Medical and Smith & Nephew.
  3. Global service robot market
    Jan 2018, 124 pages, QY Research, $3,600
    No forecasts available.

Agricultural & food handling robotics

  1. Dec 2017, Progressive Markets, $3,619
    The agricultural robots market is likely to garner $15.3 billion by 2025, growing at a CAGR of 20.95% during the forecast period from 2018 to 2025.
  2. Dec 2017, AlphaBrown, $2,975
    Spending by early adopters of harvesting robotics (500+ acre farms and greenhouse growers) is projected to reach $5.5 billion, according to a study of 1,300 growers whose principal reason for adopting is to offset the cost of labor.
  3. Nov 2017, 90 pages, Grand View Research, $4,950
    The global food robotics market is anticipated to reach USD 3.35 billion by 2025.
  4. Jan 2018, 213 pages, IDTechEx, $4,995
    Forecasts based on technology roadmaps suggest that the market will grow to $35 billion by 2038, with the potential to reach around $45 billion in a highly accelerated technology progression and market adoption scenario. The inflection point will arrive around 2024, at which point sales will grow rapidly.
  5. Dec 2017, 104 pages, QY Research, $2,900
    No forecasts available.
  6. Dec 2017, 99 pages, QY Research, $3,400
    No forecasts available.

Surveillance robots, drones, mobile robots & ROVs

  1. Sep 2017, 70 pages, TechNavio, $3,500
    Forecasts the global surveillance robots market to grow at a CAGR of 12.31% during the period 2017-2021.
  2. Oct 2017, 170 pages, Persistence Market Research, $4,900
    A focus on maritime security, coupled with greater offshore oil & gas production, is the key driver of the autonomous underwater vehicle market, which is anticipated to push past $445 million by end-2022 at a CAGR of 6.1%.
  3. Dec 2017, 193 pages, QY Research, $2,900
    The autonomous mobile robots industry was worth $158 million in 2016 and is projected to reach $390 million by 2022, at a CAGR of 16.26% between 2016 and 2022.
  4. Sep 2017, 88 pages, Skylogic Research, $1,450
    An analysis of 2,600 drone buyers, providers and business users but no forecasts available.

Image: 5G autonomous bus at the 2018 Olympics.

Autonomous vehicles

  1. Feb 2018, 93 pages, Tractica, $4,500
    Tractica forecasts that annual unit shipments will increase from approximately 343 vehicles in 2017 to 188,000 units in 2022.