
#294: Autonomous Bricklaying by FBR, with Mark Pivac



In this episode, Ron Vanderkley interviews Mark Pivac, Chief Technical Officer and co-founder of FBR (formerly Fastbrick Robotics), about ‘Hadrian X’, the world’s first end-to-end autonomous bricklaying robot. Three years after his first interview, we catch up with Pivac to see how FBR has expanded its operation and chat about the latest commercial prototype as well as the future of the robotic construction industry.

Mark Pivac is the primary inventor of FBR’s automated bricklaying technology. He is an aeronautical and mechanical engineer with over 25 years’ experience working on the development of high-technology equipment ranging from lightweight aircraft to heavy off-road equipment. He has 20 years’ experience working with Pro/ENGINEER 3D CAD software as well as high-level mathematics, including matrix mathematics, robot transformations and vector mathematics for machine motion. Mark has also worked extensively on the design, commissioning and fault-finding of servo-controlled motion systems, achieving very high dynamic performance.

Links

#SciRocChallenge tests robots in a realistic smart shopping environment


From the 17th to the 21st of September, the city of Milton Keynes hosts the European Robotics League – Smart Cities Robotics Challenge (SciRoc Challenge). For the first time, international researchers in robotics and artificial intelligence meet in a shopping mall to demonstrate the state of the art in robotics within the context of smart cities, and specifically smart shopping.

The European Robotics League, funded by the European Commission to advance research, development and innovation in robotics and artificial intelligence, is the umbrella for three robotics competitions: ERL Consumer, ERL Emergency and ERL Professional service robots. All three leagues meet every two years in the ERL Smart Cities Robotics Challenge, showcasing how real robots can make our lives better in urban environments.

The Challenge
The SciRoc challenge will be held at the Centre:MK smart shopping mall. The challenge focuses on smart shopping and is divided into a series of episodes, each consisting of a task that addresses specific research challenges. In order to accomplish their tasks, robots will have to cooperate with the simulated digital infrastructure of a smart shopping mall. Although the competing robots will face mock scenarios, the environment and difficulties are intended to be as realistic as possible, including interaction with members of the public.

The episodes are organised into three categories:

Human-Robot Interaction (HRI) and Mobility focuses on robots able to perform social tasks such as verbal interaction with humans.

Manipulation focuses on robots able to achieve manipulation tasks.

Emergency defines tasks addressed autonomously by small aerial robots.

The SciRoc consortium has designed the episodes with the collaboration of external experts from research and industry in the different categories.

The five episodes that make up the Smart Cities Robotics challenge, chosen by the robotics community from a previous list of fourteen, are:

Deliver coffee shop orders (E03)
In this episode the robot will assist customers in a coffee shop by taking orders and bringing objects to and from customers’ tables.
The main functionality evaluated in this episode is people perception. Additional side functionalities are navigation, speech synthesis and recognition.

Team SocRob and their robot MOnarCH getting ready to serve coffees. Photo Credits: European Robotics League

Take the elevator (E04)
The robot must take an elevator crowded with customers to reach a service located on another floor.
The robot should interact with the MK:DataHub to discover which floor it must reach to accomplish its task. The robot must be able to take the elevator together with regular customers of the shopping mall.

Shopping pick and pack (E07)
The robot is in one of the booths of the mall, where the goods for sale are displayed on a shelf.
Customers can place orders through a tablet. The robot must move behind the display, collect the requested packages for the customer, place them in a box, and place the box on a tray where the customer can pick it up.

Open the door (E10)
Doors are ubiquitous in human environments. There are many types of doors, some of which are easier to operate than others for a robot.
In this episode the robot will identify a door, approach it and open it completely, within a specified tolerance of 90°.

Fast delivery of emergency pills (E12)
The aerial robot must attend to an emergency situation in which a first-aid kit needs to be delivered to a customer.
The robot must be able to fly autonomously to the customer location as fast as possible.

Teams can participate in one or more episodes depending on their research interests.

The teams
A total of 10 teams from 5 different countries qualified to compete in the first edition of the ERL Smart Cities Robotics Challenge.
The teams participating in the SciRoc Challenge 2019 are:

1. SocRob@Home – The Soccer Robots or Society of Robots (SocRob) team is a long-term research project of the Instituto Superior Técnico, Portugal. Founded in 1998, the team has broad experience participating in robotics competitions such as RoboCup Soccer, RoboCup@Home, RoCKIn@Home and ERL Consumer Robots. The team has a special interest in the topic of HRI and mobility and will participate in Episode 3 – Deliver coffee shop orders.

2. Robotics Lab UC3 – This multidisciplinary research group from the Universidad Carlos III de Madrid, Spain, has previously participated in the ERL Consumer Robots league with one of the TIAGO platforms sponsored by PAL Robotics. The team will demonstrate their robot’s abilities to interact with humans in Episode 4 – Take the elevator.

3. Gentlebots – Gentlebots is a team of researchers in robotics from the Rey Juan Carlos University and the University of León, Spain. Their research focus is on software development that allows robots to exhibit intelligent behaviours and they have competed in RoboCup@Home. They will participate in Episode 3 and Episode 4.

4. b-it-bots – The team from the Hochschule Bonn-Rhein-Sieg, Germany, has broad experience with industrial and domestic robots in RoboCup@Work and RoboCup@Home. Winners of the ERL Professional Robots Season 2018-2019, they will participate in Episode 7 – Shopping pick and pack and Episode 10 – Open the door; both episodes involve manipulation tasks.

Team b-it-bots. Photo Credits: European Robotics League

“You learn a lot of things at university, but not always how to apply them in real life. Robotics competitions are the best place for students to use their knowledge in real scenarios and learn from mistakes. In this competition we are solving problems which do not have a solution yet, so the students cannot download a tutorial or watch a YouTube video; they need to create their own engineered solutions. This is learning by doing, not learning by listening,” says Deebul Nair, b-it-bots Team Manager.

5. Leeds Autonomous Service Robots – The team of the newly established AI group of the University of Leeds, UK, studies long-term decision making and adaptation. They will demonstrate how it is applied in robotics by participating in Episodes 3, 4 and 10.

6. HEARTS – The Healthcare Engineering and Assistive Robotics Technology and Services (HEARTS) team is based in the Bristol Robotics Laboratory, a collaboration between the University of the West of England (UWE) and the University of Bristol, UK. The team was formed to give students hands-on experience of developing robust, reliable assistive robots that can help people in a range of situations. The HEARTS team will participate in Episode 4 – Take the elevator.

“Participating in the SciRoc challenge gives me the opportunity to use a social robot platform like Pepper in applications different from the ones of my PhD. Pepper is designed for interaction with humans, so it is a good platform for the elevator episode in which we are competing,” explains Beth Mackey, PhD student and member of the HEARTS team.

Team HEARTS. Photo Credits: European Robotics League

7. UWE Aero – This aerial team from the University of the West of England, UK, is made up of a group of students interested in aerospace projects, focusing on unmanned aerial vehicles (UAVs). Their backgrounds in aerospace engineering, 3D printing and computer science are a perfect combination for Episode 12 – Fast delivery of emergency pills.

8. CATIE Robotics – The French technology transfer centre CATIE created this robotics team in early 2018 with the aim of exploring service robotics from an application-driven perspective. The team has since participated in RoboCup@Home.

“Competitions bring together people from different technical backgrounds under a common goal. They give visibility and the opportunity to be part of a community of experts. Robotics competitions are always very motivating” says Remi Fabre, Team Leader of CATIE robotics.

Team CATIE robotics setting up their robot. Photo Credits: European Robotics League

They will apply their knowledge in control and grasping in Episode 7 – Shopping pick and pack.

9. eNTiTy – Everbots – eNTiTy is the team of the R&D department of NTT Disruption, Spain. The team focuses on developing social robotics applications for clients. They first participated in the ERL Consumer tournament at the IROS 2018 conference in Madrid.

“Participating in robotics competitions such as the SciRoc challenge helps us advance the state of the art in social robotics and put together a good team of researchers. It gives us the opportunity to test different algorithms, such as vision modules, that we can then apply to other products,” says Julian Caro Linares, robotics engineer at NTT Disruption.

They will participate with a PAL Robotics TIAGO robot in Episode 3 – Deliver coffee shop orders and Episode 4 – Take the elevator.
Irene Diaz-Portales, computer vision researcher, adds: “we chose TIAGO because it’s an excellent robotics platform for developing ROS modules.”

Team eNTiTy testing TIAGO robot. Photo Credits: European Robotics League

10. TeamBathDrones Research – TeamBathDrones Research is the University of Bath, UK, competitive autonomous aircraft team. The team is formed of a mixture of lecturers, PhD students and undergraduate students from the engineering faculty. By entering the ERL Smart Cities challenge in the emergency category, they aim to demonstrate the application of collision avoidance by in-flight risk minimisation in Episode 12 – Fast delivery of emergency pills.

Which teams will successfully address the SciRoc Challenge Episodes? Don’t miss the updates starting this week.

Catalia Health and Pfizer collaborate on robots for healthcare

New robot platform improves patient experience using AI to help patients navigate barriers and health care challenges

SAN FRANCISCO, Sept. 12, 2019 /PRNewswire/ — Catalia Health and Pfizer today announced they have launched a pilot program to explore patient behaviors outside of clinical environments and to test the impact regular engagement with artificial intelligence (AI) has on patients’ treatment journeys. The 12-month pilot uses the Mabu® Wellness Coach, a robot that uses artificial intelligence to gather insights into symptom management and medication adherence trends in select patients.

The Mabu robot can interact with patients using AI algorithms to engage in tailored, voice-based conversations. Mabu “talks” with patients about how they are feeling and helps answer questions they may have about their treatment. The Mabu Care Insights Platform then delivers detailed data and insights to clinicians at a specialty pharmacy provider to help human caregivers initiate timely and appropriate outreach to the patient. The goal is to help better manage symptoms and address patient questions in real-time.

“At Catalia Health we’ve seen firsthand the benefits that AI has brought to healthcare for both the patient and the healthcare systems,” said Cory Kidd, founder and CEO of Catalia Health. “Our work with Pfizer allows us to engage with patients on a larger scale and therefore gain access to more insights and data that we hope can improve health outcomes.”

Mabu is helping to deliver personalized care by gaining insights that allow the specialty pharmacy to reach out to patients as they express challenges in managing their conditions. Mabu also generates health tips and reminders to help patients get additional information about their condition and treatment that may help them along the way. Over time, it is our goal that Mabu can help patients navigate barriers and health care challenges that are often a part of managing a chronic disease.

“The healthcare system is overburdened, and as a result, patients often seek more-coordinated care and information. Through this collaboration with Catalia Health, we hope to learn through real-time data and insights about challenges patients face, outside the clinical setting, with the goal to improve their treatment journeys in the future,” said Lidia Fonseca, Chief Digital and Technology Officer at Pfizer. “This pilot is an example of how we are working to develop digital companions for all our medicines to better support patients in their treatment journeys.”

The pilot program was officially announced on stage at the National Association of Specialty Pharmacy’s Annual Meeting & Expo on September 10, 2019. Initial pilot data will be available in the coming months. For more information, visit www.cataliahealth.com

About Catalia Health

Catalia Health is a San Francisco-based patient care management company founded by Cory Kidd, Ph.D., in 2014. Catalia Health provides an effective and scalable solution for individuals managing chronic disease or taking medications on an ongoing basis. The company’s AI-powered robot, Mabu, enables healthcare providers and pharmaceutical companies to better support patients living with chronic illness. Mabu uses a voice-based interface designed for simple, intuitive use by a wide variety of patients in remote care environments. The cloud-based platform delivers unique conversations to patients each time they have a conversation with Mabu.

Catalia Health’s care management programs are tailored to increase clinically appropriate medication adherence, improve symptom management and reduce the likelihood that a patient is readmitted to the hospital after being discharged.

For more information, visit www.cataliahealth.com

A gentle grip on gelatinous creatures

Jellyfish are about 95% water, making them some of the most diaphanous, delicate animals on the planet. But the remaining 5% of them have yielded important scientific discoveries, like green fluorescent protein (GFP) that is now used extensively by scientists to study gene expression, and life-cycle reversal that could hold the keys to combating aging. Jellyfish may very well harbor other, potentially life-changing secrets, but the difficulty of collecting them has severely limited the study of such “forgotten fauna.” The sampling tools available to marine biologists on remotely operated vehicles (ROVs) were largely developed for the marine oil and gas industries, and are much better-suited to grasping and manipulating rocks and heavy equipment than jellies, often shredding them to pieces in attempts to capture them.

A new ultra-soft gripper developed at the Wyss Institute and Baruch College uses fettuccini-like silicone “fingers” inflated with water to gently but firmly grasp jellyfish and release them without harm, allowing scientists to safely interact with these delicate creatures in their own habitats. Credit: Anand Varma

Now, a new technology developed by researchers at Harvard’s Wyss Institute for Biologically Inspired Engineering, John A. Paulson School of Engineering and Applied Sciences (SEAS), and Baruch College at CUNY offers a novel solution to that problem in the form of an ultra-soft, underwater gripper that uses hydraulic pressure to gently but firmly wrap its fettuccini-like fingers around a single jellyfish, then release it without causing harm. The gripper is described in a new paper published in Science Robotics.

“Our ultra-gentle gripper is a clear improvement over existing deep-sea sampling devices for jellies and other soft-bodied creatures that are otherwise nearly impossible to collect intact,” said first author Nina Sinatra, Ph.D., a former graduate student in the lab of Robert Wood at the Wyss Institute. “This technology can also be extended to improve underwater analysis techniques and allow extensive study of the ecological and genetic features of marine organisms without taking them out of the water.”

The gripper’s six “fingers” are composed of thin, flat strips of silicone with a hollow channel inside bonded to a layer of flexible but stiffer polymer nanofibers. The fingers are attached to a rectangular, 3D-printed plastic “palm” and, when their channels are filled with water, curl in the direction of the nanofiber-coated side. The fingers each exert an extremely low amount of pressure – about 0.0455 kPa, or less than one-tenth of the pressure of a human’s eyelid on their eye. By contrast, current state-of-the-art soft marine grippers, which are used to capture delicate but more robust animals than jellyfish, exert about 1 kPa.

First author Nina Sinatra, Ph.D. tests the ultra-soft gripper on a jellyfish at the New England Aquarium. Credit: Anand Varma

The researchers fitted their ultra-gentle gripper to a specially created hand-held device and tested its ability to grasp an artificial silicone jellyfish in a tank of water to determine the positioning and precision required to collect a sample successfully, as well as the optimum angle and speed at which to capture a jellyfish. They then moved on to the real thing at the New England Aquarium, where they used the grippers to grab swimming moon jellies, jelly blubbers, and spotted jellies, all about the size of a golf ball.

The gripper was successfully able to trap each jellyfish against the palm of the device, and the jellyfish were unable to break free from the fingers’ grasp until the gripper was depressurized. The jellyfish showed no signs of stress or other adverse effects after being released, and the fingers were able to open and close roughly 100 times before showing signs of wear and tear.

“Marine biologists have been waiting a long time for a tool that replicates the gentleness of human hands in interacting with delicate animals like jellyfish from inaccessible environments,” said co-author David Gruber, Ph.D., who is a Professor of Biology and Environmental Science at Baruch College, CUNY and a National Geographic Explorer. “This gripper is part of an ever-growing soft robotic toolbox that promises to make underwater species collection easier and safer, which would greatly improve the pace and quality of research on animals that have been under-studied for hundreds of years, giving us a more complete picture of the complex ecosystems that make up our oceans.”

The ultra-soft gripper is the latest innovation in the use of soft robotics for underwater sampling, an ongoing collaboration between Gruber and Wyss Founding Core Faculty member Robert Wood, Ph.D. that has produced the origami-inspired RAD sampler and multi-functional “squishy fingers” to collect a diverse array of hard-to-capture organisms, including squids, octopuses, sponges, sea whips, corals, and more.

“Soft robotics is an ideal solution to long-standing problems like this one across a wide variety of fields, because it combines the programmability and robustness of traditional robots with unprecedented gentleness thanks to the flexible materials used,” said Wood, who is the co-lead of the Wyss Institute’s Bioinspired Soft Robotics Platform, the Charles River Professor of Engineering and Applied Sciences at SEAS, and a National Geographic Explorer.

“At the Wyss Institute we are always asking, ‘How can we make this better?’ I am extremely impressed by the ingenuity and out-of-the-box thinking that Rob Wood and his team have applied to solve a real-world problem that exists in the open ocean, rather than in the laboratory. This could help to greatly advance ocean science,” said Wyss Institute Founding Director Donald Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School, the Vascular Biology Program at Boston Children’s Hospital, and Professor of Bioengineering at SEAS.

A new ultra-soft gripper developed at the Wyss Institute uses fettuccini-like silicone “fingers” inflated with water to gently but firmly grasp jellyfish and release them without harm, allowing scientists to safely interact with these delicate creatures in their own habitats. Credit: Wyss Institute at Harvard

The team is continuing to refine the ultra-soft gripper’s design, and aims to conduct studies that evaluate the jellyfishes’ physiological response to being held by the gripper, to more definitively prove that the device does not cause the animals stress. Wood and Gruber are also co-Principal Investigators of the Schmidt Ocean Institute’s “Designing the Future” project, and will be further testing their various underwater robots on an upcoming expedition aboard the research ship Falkor in 2020.

Additional authors of the paper are Clark Teeple, Daniel Vogt, M.S., and Kevin Kit Parker, Ph.D. from the Wyss Institute and Harvard SEAS. Parker is a Founding Core Faculty member of the Wyss Institute and the Tarr Family Professor of Bioengineering and Applied Physics at SEAS. The research was supported by the National Science Foundation, The Harvard University Materials Research Science and Engineering Center, The National Academies Keck Futures Initiative, and the National Geographic Society.

Modelling of a Transport Robot Fleet in Simulink

InSystems and Model Engineering Solutions jointly developed a Simulink model of an adaptive fleet of InSystems’ proANT collaborating transport robots. The goal was to capture the desired adaptive system behaviour in order to deal more effectively with the typical goals and challenges of collaborative embedded system groups (CSGs). A fleet of robots has to react to dynamic changes in the policy of the manufacturing execution system, or in the number and nature of its members, to safeguard its functionality.

The consistent application of a model-based development process for automation systems offers a variety of benefits in dealing with these challenges. First and foremost, specifying the CSG as executable models allows for a fully virtual, simulated representation of the robot fleet. This provides a sound foundation for efficiently developing and maintaining the actual system. To exploit the full potential, a model-based approach relies on the reusability of models and test beds throughout the different development phases. Secondly, the model-based development process profits from a fully integrated tool chain that largely automates the associated development activities. These include requirements management, modelling, and simulation, as well as integrated quality assurance tasks, most notably model-based static analysis and requirements-based testing. Tools such as the MES Model Examiner® and the MES Test Manager® help streamline the process.

Figure: Fleet of proANT AGVs at Bierbaum Unternehmensgruppe, transporting open barrels.
Image source: Model Engineering Solutions GmbH
www.model-engineers.com

The post Modelling of a Transport Robot Fleet in Simulink appeared first on Roboticmagazine.

Intel RealSense 3D Camera for robotics & SLAM (with code)

The Intel® RealSense™ D400 Depth Cameras. Credit: Intel Corporation

The Intel RealSense cameras have been gaining in popularity for the past few years for use as a 3D camera and for visual odometry. I had the chance to hear a presentation from Daniel Piro about using the Intel RealSense cameras generally and for SLAM (Simultaneous Localization and Mapping). The following post is based on his talk.

Comparing depth and color RGB images

Depth Camera (D400 series)

Depth information is important since it gives us what we need to understand shapes, sizes, and distance. This lets us (or a robot) know how far away items are, so we can avoid running into things and plan paths around obstacles in the field of view. Traditionally this information has come from RADAR or LIDAR; however, in some applications we can also get it from cameras. With cameras, we often get depth by using two cameras for stereo vision.

The Intel RealSense Depth camera (D400 series) uses stereoscopic depth sensing to determine the range to an item. Essentially it has two cameras and can triangulate between them for stereo. The sensor uses two infrared cameras for the stereo pair and also has an RGB camera onboard, so you can get four data products from it: an RGB image, a depth image, a left infrared image, and a right infrared image. Think of each frame as a 3D snapshot of the environment, where each colour (RGB) pixel also has a range value (depth) to the item in the image. The farther an item is from the camera, the greater the range/depth error will be.
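As a rough sketch of grabbing those four data products with the pyrealsense2 Python bindings (used again in the snippets at the end of this post); the resolutions and frame rates here are just illustrative:

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# Enable the four streams: depth, RGB, and the left/right infrared imagers
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
config.enable_stream(rs.stream.infrared, 1, 640, 480, rs.format.y8, 30)  # left IR
config.enable_stream(rs.stream.infrared, 2, 640, 480, rs.format.y8, 30)  # right IR
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    color = frames.get_color_frame()
    ir_left = frames.get_infrared_frame(1)
    ir_right = frames.get_infrared_frame(2)
    # Each depth pixel can be converted to a range in metres
    print("Range at image centre:", depth.get_distance(320, 240), "m")
finally:
    pipeline.stop()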

Applications In Computer Vision

The D400 cameras have an infrared projector that adds texture to surfaces with minimal natural texture, giving the infrared cameras better features for computing the stereo reconstruction. The projector can be turned on and off. Disabling it is often useful for tracking applications (since the projected dots don’t move with the items being tracked).

RealSense D400 camera versions

One thing to be aware of is that the infrared images are rectified in the camera (so they appear in a common image plane), but the RGB camera image is not. This means that if you want the depth and RGB images to line up well, you need to rectify the RGB image yourself.

Pro Tip 1: The driver has a UV map to help map from the depth pixel to the RGB image to help account for the difference in image sizes. This lets you match the depth to RGB image data points better.
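Rather than mapping between the images by hand, one option in the SDK is the align processing block, which re-projects the depth frame into the colour camera’s viewpoint. A minimal sketch, with stream settings as in the snippets at the end of this post:

import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

# Align depth frames to the colour camera's viewpoint so pixels correspond
align = rs.align(rs.stream.color)

try:
    frames = pipeline.wait_for_frames()
    aligned = align.process(frames)
    depth_image = np.asanyarray(aligned.get_depth_frame().get_data())
    color_image = np.asanyarray(aligned.get_color_frame().get_data())
    print(depth_image.shape, color_image.shape)  # now the same width/height
finally:
    pipeline.stop()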

Pro Tip 2: If using the D435i (the IMU version), use the timestamps from the images and IMU to synchronize the two sensors. If you use system time (from your computer) there will be more error (partially due to weird USB timing).
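A minimal sketch of reading the hardware timestamps on a D435i so images and IMU samples can be lined up; the accel/gyro rates shown are typical supported values and may need adjusting for your unit:

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.accel, rs.format.motion_xyz32f, 250)
config.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f, 200)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    for f in frames:
        # get_timestamp() returns the device timestamp in milliseconds;
        # use these (not host system time) to line up images with IMU samples.
        print(f.get_profile().stream_type(), f.get_timestamp())
finally:
    pipeline.stop()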

Pro Tip 3: The cameras have Depth Presets. These are profiles that let you optimize various settings for various conditions. Such as high density, high accuracy, etc..
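A hedged sketch of selecting a depth preset in code through the visual_preset option; the index-to-name mapping varies by camera model, so the sketch prints the device’s own descriptions instead of hard-coding a particular preset:

import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start(rs.config())
depth_sensor = profile.get_device().first_depth_sensor()

if depth_sensor.supports(rs.option.visual_preset):
    # List the presets this particular camera supports
    opt_range = depth_sensor.get_option_range(rs.option.visual_preset)
    for i in range(int(opt_range.min), int(opt_range.max) + 1):
        print(i, depth_sensor.get_option_value_description(rs.option.visual_preset, i))
    # Pick one from the printed list (e.g. a high-accuracy or high-density
    # preset); the default is used here as a safe placeholder.
    depth_sensor.set_option(rs.option.visual_preset, float(opt_range.default))

pipeline.stop()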

Pro Tip 4: Make sure your exposure is set correctly. If you are using auto exposure, try changing the Mean Intensity Set Point (it is not under the exposure setting; it is under AE control, which is not the most obvious place).
If you want to use manual exposure, you can play with the exposure set point and the gain constant. Start with the exposure set point, then adjust the gain.
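A minimal sketch of switching between auto and manual exposure on the stereo (depth) sensor; the exposure and gain numbers are placeholders, and the auto-exposure mean-intensity set point mentioned above lives in the D400 advanced-mode controls rather than the basic options shown here:

import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start(rs.config())
depth_sensor = profile.get_device().first_depth_sensor()

# Option A: stay on auto exposure (the mean-intensity set point is adjusted
# through the rs400 advanced-mode AE controls, not through these options).
depth_sensor.set_option(rs.option.enable_auto_exposure, 1)

# Option B: manual exposure -- start with the exposure set point, then gain.
depth_sensor.set_option(rs.option.enable_auto_exposure, 0)
depth_sensor.set_option(rs.option.exposure, 8500)  # microseconds, placeholder
depth_sensor.set_option(rs.option.gain, 16)        # placeholder value

pipeline.stop()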

You might also want to see this whitepaper for more methods of tuning the cameras for better performance.

Visual Odometry & SLAM (T265)

Visual odometry is the generic term for figuring out how far you have moved using a camera. This is as opposed to “standard” odometry using things such as wheel encoders, or inertial odometry with an IMU.

These are not the only ways to get odometry. Having multiple sources of odometry is nice so you can take the advantages of each type and fuse them together (such as with a Kalman filter).
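As a toy illustration of that fusion idea (not from the talk), here is a minimal inverse-variance weighting step, which is what a Kalman filter’s measurement update reduces to when fusing independent estimates of a single static quantity; the velocity values and variances below are made up:

import numpy as np

def fuse(estimates, variances):
    """Fuse independent estimates of the same quantity by inverse-variance
    weighting (the steady-state Kalman update for a static state)."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances
    fused = np.sum(weights * estimates) / np.sum(weights)
    fused_var = 1.0 / np.sum(weights)
    return fused, fused_var

# Example: forward velocity (m/s) from wheel encoders and visual odometry.
v, var = fuse([0.52, 0.48], [0.02**2, 0.05**2])
print(v, var)  # the fused estimate leans towards the lower-variance source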

The RealSense T265 is a tracking camera designed to be better suited to visual odometry and SLAM (it has a wider field of view and does not use infrared light). It can do SLAM onboard as well as loop closure. However, this camera cannot return RGB images (since it does not have an RGB camera onboard), and the depth it returns is not as good as the D400 series (and can be a little trickier to get).
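For reference, a minimal sketch (not from the talk) of reading the T265’s pose output with pyrealsense2; the fields follow the SDK’s pose_data structure:

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.pose)  # T265 6-DoF pose stream
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    pose_frame = frames.get_pose_frame()
    if pose_frame:
        pose = pose_frame.get_pose_data()
        print("Position (m):", pose.translation)
        print("Velocity (m/s):", pose.velocity)
        print("Orientation (quaternion):", pose.rotation)
        print("Tracker confidence (0-3):", pose.tracker_confidence)
finally:
    pipeline.stop()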

Using a RealSense D435i sensor together with a RealSense T265 sensor can provide both the maps and better-quality visual odometry for developing a full SLAM system: the D435i is used for the mapping, and the T265 for the tracking.

Software

RealSense: Putting Everything Together to Build a Full System

Intel provides the RealSense SDK 2.0 library for using the RealSense cameras. It is open source and works on Mac, Windows, Linux, and Android. There are also ROS and OpenCV wrappers.

Click here for the developers page with the SDK.

The SDK (software development kit) includes a viewer that lets you view images, record images, change settings, and update the firmware.

Pro Tip 5: Spend some time with the viewer looking at the camera and depth images when designing your system, so you can compare various mounting angles, heights, etc. for the cameras.

The RealSense SDK has a few filters that can run on your computer to try to improve the returned depth map. You can turn these on and off in the viewer (a minimal sketch of chaining them in code follows the list below). Some of these include:

  1. Decimation Filter
  2. Spatial Edge-Preserving Filter
  3. Temporal Filter
  4. Holes Filling Filter
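Here is that minimal sketch of chaining the post-processing filters on a depth frame, with all filter parameters left at their defaults:

import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start(rs.config())

# The SDK's post-processing blocks, applied in the usual order
decimation = rs.decimation_filter()
spatial = rs.spatial_filter()       # spatial edge-preserving smoothing
temporal = rs.temporal_filter()
hole_filling = rs.hole_filling_filter()

try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    filtered = decimation.process(depth)
    filtered = spatial.process(filtered)
    filtered = temporal.process(filtered)
    filtered = hole_filling.process(filtered)
    print("Filtered depth shape:", np.asanyarray(filtered.get_data()).shape)
finally:
    pipeline.stop()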

Within ROS there is a realsense-ros package that provides a wrapper for working with the cameras in ROS and lets you view images and other data in RVIZ.
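As an illustrative sketch (not part of the talk), a minimal rospy node that subscribes to the colour and depth topics the realsense-ros wrapper typically publishes; the exact topic names depend on your launch file and camera namespace:

import rospy
from sensor_msgs.msg import Image

def on_color(msg):
    rospy.loginfo("colour image %dx%d", msg.width, msg.height)

def on_depth(msg):
    rospy.loginfo("depth image %dx%d", msg.width, msg.height)

if __name__ == "__main__":
    rospy.init_node("realsense_listener")
    # Topic names below are the wrapper's usual defaults -- adjust to your setup.
    rospy.Subscriber("/camera/color/image_raw", Image, on_color)
    rospy.Subscriber("/camera/depth/image_rect_raw", Image, on_depth)
    rospy.spin()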

ROS RealSense Occupancy Map package is available as an experimental feature in a separate branch of the RealSense git repo. This uses both the D400 and T265 cameras for creating the map.

For SLAM with just the D435i sensor, see here.

Pro Tip 6: You can use multiple T265 sensors for better accuracy. For example, if one sensor is pointed forward and another backwards; you can use the confidence values from each sensor to feed into a filter.

I know this is starting to sound like a sales pitch.

Pro Tip 7: If you have multiple cameras, you can connect to them and query their serial numbers to know which camera is which.
Also, if you remove the little connector at the top of the camera, you can wire and chain multiple cameras together to synchronize them. (This should work in SDK 2.0.)
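A minimal sketch of enumerating connected cameras by serial number and binding a pipeline to one specific device:

import pyrealsense2 as rs

ctx = rs.context()
serials = []
for dev in ctx.query_devices():
    name = dev.get_info(rs.camera_info.name)
    serial = dev.get_info(rs.camera_info.serial_number)
    print(name, serial)
    serials.append(serial)

# Bind a pipeline to one particular camera by its serial number
if serials:
    config = rs.config()
    config.enable_device(serials[0])
    pipeline = rs.pipeline()
    pipeline.start(config)
    pipeline.stop()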

See this whitepaper for working with multiple camera configurations. https://simplecore.intel.com/realsensehub/wp-content/uploads/sites/63/Multiple_Camera_WhitePaper04.pdf

Pro Tip 8: Depth points with no valid return (such as points too close or too far from the sensor) have a depth value of 0. This is good to know for filtering.

Here are two code snippets, provided by Daniel, for using the cameras. The first displays the colour and depth images using Python. The second also uses OpenCV to detect blobs, again in Python.

## License: Apache 2.0. See LICENSE file in root directory.
## Copyright(c) 2015-2017 Intel Corporation. All Rights Reserved.

###############################################
##      Open CV and Numpy integration        ##
###############################################

import pyrealsense2 as rs
import numpy as np
import cv2


cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)

# Configure depth and color streams
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

# Start streaming
pipeline.start(config)

try:
    while True:

        
        # Wait for a coherent pair of frames: depth and color
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()
        if not depth_frame or not color_frame:
            continue

        # Convert images to numpy arrays
        depth_image = np.asanyarray(depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())

        # Apply colormap on depth image (image must be converted to 8-bit per pixel first)
        
        scaled_depth=cv2.convertScaleAbs(depth_image, alpha=0.08)
        depth_colormap = cv2.applyColorMap(scaled_depth, cv2.COLORMAP_JET)

        # Stack both images horizontally
        images = np.hstack((color_image, depth_colormap))

        # Show images
        cv2.imshow('RealSense', images)


        k = cv2.waitKey(1) & 0xFF
        if k == 27:
            break

finally:

    # Stop streaming
    pipeline.stop()
## License: Apache 2.0. See LICENSE file in root directory.
## Copyright(c) 2015-2017 Intel Corporation. All Rights Reserved.

###############################################
##      Open CV and Numpy integration        ##
###############################################

import pyrealsense2 as rs
import numpy as np
import cv2

def nothing(args):
    pass

def detectBlobs(mask):
    # Set up the SimpleBlobDetector with default parameters.
    params = cv2.SimpleBlobDetector_Params()

    # Change thresholds
    params.minThreshold = 1
    params.maxThreshold = 255

    # Filter by Area.
    params.filterByArea = True
    params.maxArea = 4000
    params.minArea = 300

    # Filter by Circularity
    params.filterByCircularity = True
    params.minCircularity = 0.1

    # Filter by Convexity
    params.filterByConvexity = True
    params.minConvexity = 0.5

    # Filter by Inertia
    params.filterByInertia = True
    params.minInertiaRatio = 0.1

    detector = cv2.SimpleBlobDetector_create(params)

    # Detect blobs in the mask and draw the keypoints on it.
    keypoints = detector.detect(mask)
    im_with_keypoints = cv2.drawKeypoints(mask, keypoints, np.array([]),
            (0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
    return im_with_keypoints

def thresholdDepth(depth):
    depth[depth==0] = 255  # set all invalid depth pixels (value 0) to 255
    threshold_value = cv2.getTrackbarPos('Threshold','Truncated Depth')
    # Zero out pixels whose scaled depth is above the threshold (dist > TH)
    ret,truncated_depth = cv2.threshold(depth, threshold_value, 255, cv2.THRESH_BINARY_INV)
    return truncated_depth

cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)
cv2.namedWindow('Truncated Depth', cv2.WINDOW_AUTOSIZE)
cv2.createTrackbar('Threshold','Truncated Depth',30,255,nothing)

# Configure depth and color streams
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

# Start streaming
pipeline.start(config)

try:
    while True:

        
        # Wait for a coherent pair of frames: depth and color
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()
        if not depth_frame or not color_frame:
            continue

        # Convert images to numpy arrays
        depth_image = np.asanyarray(depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())

        # Apply colormap on depth image (image must be converted to 8-bit per pixel first)
        scaled_depth=cv2.convertScaleAbs(depth_image, alpha=0.08)
        depth_colormap = cv2.applyColorMap(scaled_depth, cv2.COLORMAP_JET)

        # Stack both images horizontally
        images = np.hstack((color_image, depth_colormap))

        # Show images
        cv2.imshow('RealSense', images)
        truncated_depth=thresholdDepth(scaled_depth)
        truncated_depth=detectBlobs(truncated_depth)
        cv2.imshow('Truncated Depth', truncated_depth)

        k = cv2.waitKey(1) & 0xFF
        if k == 27:
            break

finally:

    # Stop streaming
    pipeline.stop()

I hope you found this informative and can make use of the Pro Tips.

Thank you to Daniel for presenting this information and allowing me to share it. This content is based on his talk. Daniel has also provided the full slide set that can be accessed by clicking here.

Disclaimer: I have not received any funding or free items from Intel.

This post appeared first on Robots For Roboticists.
