
A decade of Open Robotics

March 22nd, 2012 is the day it all began. That’s the day we officially incorporated the Open Source Robotics Foundation, the origin of what we now call Open Robotics. The prospect of starting a company is both scary and exciting; but starting an open-source company in a niche as specialized as robotics, now that is terrifying and exhilarating, if not a little unorthodox. All we had was a dream, some open-source code, and some very smart friends, a whole lot of them.

We also had the wind at our backs. Until March of 2012, Willow Garage had been the steward of ROS and Gazebo, nurturing them from an idea into a growing community. Willow had always planned to have ROS and Gazebo ‘graduate’ to an entity outside of Willow Garage; making any real progress in robotics required a worldwide effort, not that of a single company. The catalyst was the DARPA Robotics Challenge. When OSRF was hired to create and manage the world’s first-ever robotics simulation event, the company was up and running.

Ten years later, here we are, an overnight success a decade in the making! And we couldn’t have done it without the help of the ROS and Gazebo community. On this important day, we wanted to look back on what we’ve accomplished for and with that community, the robotics industry, and the world in general.

A simulated Open Robotics birthday cake.

First and foremost, we’ve made a lot of ROS and Gazebo releases! After ten years we’ve managed to release nine ROS 1 distros, eight ROS 2 distros (about to be nine), eleven Gazebo distros, and coming up on seven Ignition distros. Interest in ROS has grown to the point where we needed tooling to integrate multiple robotic systems, so we’ve also created a fifth open-source project, Open-RMF.

However, distro releases are only part of the story. Along the way the community has kept pace and has continued to introduce an incredible number of ROS packages to augment the core capabilities of ROS. Looking at just public GitHub repositories, there are 5852 repositories tagged for ROS and 707 ROS 2 repositories. Not to be outdone, the academic community has cited the original ROS paper, “ROS: an open-source Robot Operating System,” 9451 times according to Google Scholar. Our annual developers meeting, ROSCon, is now entering its tenth year, and we’re happy to report that we’ve had over 8000 attendees, and generous sponsorship from over 150 different organizations.

Looking back over the years, one of the most common themes in our work is friendly competition. We’ve worked with clients all over the globe to create realistic simulations of almost every robotics domain; from factories in NIST ARIAC, to maritime environments in VRX, to disaster sites in the DARPA Robotics Challenge, we’ve seen it all! We recently concluded the DARPA SubT competition, which marks our ninth high-profile competition successfully executed, with two more in the works now. We believe that these friendly competitions between robot developers have been one of the driving forces behind our success.

As we’ve grown, we’ve also seen ROS and Gazebo communities grow along with us. Last year, the ROS Wiki had approximately 2.5 million visitors, and at least one visitor from every single country on the globe! In 2011, the first year of our record keeping, we only served 290,102 ROS binary packages, and in ten years that number has soared to 35,036,199 in 2021! Similarly, in 2011 we had only 4517 unique visitors downloading ROS packages, and today that number has grown to 789,956 unique visitors. That’s 175 times more users in just a decade! What was once just a handful of researchers and students is now a world-wide community of professionals, hobbyists, and academics.

In our ten years, Open Robotics itself has also grown and changed. When we started in 2012 we were just a handful of people working in a small office; today we’re a team of 50 people spread out across the globe, with staff in the US, Singapore, Spain, Germany, and Japan. We’ve mentored dozens of interns, both in-house and through the Google Summer of Code and Outreachy programs, who have gone on to successful careers, some of whom have even founded their own companies.

A birthday card from one of our youngest ROS users.

It has been a wild ten years and truly humbling to see an untold number of developers and users support and build upon our work. We don’t know what the next ten years will look like; we may reach the moon, or the bottom of the ocean, but we won’t be able to get there without the support of our open-source community. We look forward to the next decade!

Silicone raspberry used to train harvesting robots

Raspberries are the ultimate summer fruit. Famous for their eye-catching scarlet color and distinctive structure, they consist of dozens of fleshy drupelets with a sweet yet slightly acidic pulp. But this delicate structure is also their primary weakness, as it leaves them vulnerable to even the slightest scratch or bruise. Farmers know all too well that raspberries are a difficult fruit to harvest—and that's reflected in their price tag. But what if robots, equipped with advanced actuators and sensors, could lend a helping hand? Engineers at EPFL's Computational Robot Design & Fabrication (CREATE) lab have set out to tackle this very challenge.

Mining garbage and the circular economy

Over the last century, we have mined more and more raw materials, and manufactured products from those materials. Increased prices for mined raw materials and improved recycling technology have enabled some industries to rely more on recycled materials than ever before. A report by Bob Tita in the Wall Street Journal last week detailed this trend in the aluminum business. According to Tita:

Developing a crowd-friendly robotic wheelchair

Robotic wheelchairs may soon be able to move through crowds smoothly and safely. As part of CrowdBot, an EU-funded project, EPFL researchers are exploring the technical, ethical and safety issues related to this kind of technology. The aim of the project is to eventually help the disabled get around more easily.

Helen Greiner: Solar Powered Robotic Weeding | Sense Think Act Podcast #16

In this episode, Audrow Nash speaks to Helen Greiner, CEO at Tertill, which makes a small solar-powered weeding robot for vegetable gardens. The conversation begins with an overview of Helen’s previous robotics experience, including as a student at MIT, as co-founder of iRobot, as founder and CEO of CyPhyWorks, and as an advisor to government research in robotics, AI, and machine learning. From there, Helen explains the design of the Tertill robot, how it works, and her high hopes for this simple robot: to help reduce the environmental impact of the agriculture industry by helping people grow their own food. In the last part of the conversation, Helen speaks broadly about her experience in robotics startups, the robotics industry, and the future of robotics.


Handheld surgical robot can help stem fatal blood loss

Matt Johnson (right) and Laura Brattain (left) test a new medical device on an artificial model of human tissue and blood vessels. The device helps users to insert a needle and guidewire quickly and accurately into a vessel, a crucial first step to halting rapid blood loss. Photo: Nicole Fandel.

By Anne McGovern | MIT Lincoln Laboratory

After a traumatic accident, there is a small window of time when medical professionals can apply lifesaving treatment to victims with severe internal bleeding. Delivering this type of care is complex, and key interventions require inserting a needle and catheter into a central blood vessel, through which fluids, medications, or other aids can be given. First responders, such as ambulance emergency medical technicians, are not trained to perform this procedure, so treatment can only be given after the victim is transported to a hospital. In some instances, by the time the victim arrives to receive care, it may already be too late.

A team of researchers at MIT Lincoln Laboratory, led by Laura Brattain and Brian Telfer from the Human Health and Performance Systems Group, together with physicians from the Center for Ultrasound Research and Translation (CURT) at Massachusetts General Hospital, led by Anthony Samir, has developed a solution to this problem. The Artificial Intelligence–Guided Ultrasound Intervention Device (AI-GUIDE) is a handheld platform technology that has the potential to help personnel with simple training quickly insert a catheter into a common femoral vessel, enabling rapid treatment at the point of injury.

“Simplistically, it’s like a highly intelligent stud-finder married to a precision nail gun,” says Matt Johnson, a research team member from the laboratory’s Human Health and Performance Systems Group.

AI-GUIDE is a platform device made of custom-built algorithms and integrated robotics that could pair with most commercial portable ultrasound devices. To operate AI-GUIDE, a user first places it on the patient’s body, near where the thigh meets the abdomen. A simple targeting display guides the user to the correct location and then instructs them to pull a trigger, which precisely inserts the needle into the vessel. The device verifies that the needle has penetrated the blood vessel, and then prompts the user to advance an integrated guidewire, a thin wire inserted into the body to guide a larger instrument, such as a catheter, into a vessel. The user then manually advances a catheter. Once the catheter is securely in the blood vessel, the device withdraws the needle and the user can remove the device.

With the catheter safely inside the vessel, responders can then deliver fluid, medicine, or other interventions.
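The operating sequence described above is essentially a short, linear workflow: align, insert, verify, advance the guidewire, advance the catheter, retract. The sketch below illustrates that control flow only; every class, method, and sensor name in it is a placeholder, since the actual AI-GUIDE software is not public.

```python
# Illustrative sketch of the AI-GUIDE operating sequence described above.
# All class, method, and sensor names here are hypothetical.

from enum import Enum, auto


class Step(Enum):
    ALIGN = auto()             # targeting display guides the user over the vessel
    INSERT_NEEDLE = auto()     # trigger pull drives the needle into the vessel
    VERIFY = auto()            # device confirms the needle is in the lumen
    ADVANCE_GUIDEWIRE = auto()
    ADVANCE_CATHETER = auto()  # performed manually by the user
    RETRACT_NEEDLE = auto()
    DONE = auto()


def run_sequence(device):
    """Walk a (hypothetical) device object through the insertion workflow."""
    step = Step.ALIGN
    while step is not Step.DONE:
        if step is Step.ALIGN:
            device.prompt("Move device until the target vessel is centered")
            if device.vessel_centered():
                step = Step.INSERT_NEEDLE
        elif step is Step.INSERT_NEEDLE:
            device.prompt("Pull trigger to insert needle")
            if device.trigger_pulled():
                device.insert_needle()
                step = Step.VERIFY
        elif step is Step.VERIFY:
            # the placement check is covered in more detail later in the article
            step = Step.ADVANCE_GUIDEWIRE if device.needle_in_vessel() else Step.ALIGN
        elif step is Step.ADVANCE_GUIDEWIRE:
            device.prompt("Advance guidewire")
            step = Step.ADVANCE_CATHETER
        elif step is Step.ADVANCE_CATHETER:
            device.prompt("Advance catheter manually")
            step = Step.RETRACT_NEEDLE
        elif step is Step.RETRACT_NEEDLE:
            device.retract_needle()
            step = Step.DONE
```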

AI-GUIDE automates nearly every step of the process to locate and insert a needle, guidewire, and catheter into a blood vessel to facilitate lifesaving treatment. The version of the device shown here is optimized to locate the femoral blood vessels, which are in the upper thigh. Image courtesy of the researchers.

As easy as pressing a button

The Lincoln Laboratory team developed the AI in the device by leveraging technology used for real-time object detection in images.

“Using transfer learning, we trained the algorithms on a large dataset of ultrasound scans acquired by our clinical collaborators at MGH,” says Lars Gjesteby, a member of the laboratory’s research team. “The images contain key landmarks of the vascular anatomy, including the common femoral artery and vein.”

These algorithms interpret the visual data coming in from the ultrasound that is paired with AI-GUIDE and then indicate the correct blood vessel location to the user on the display.
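As a rough illustration of what transfer learning for real-time detection can look like, the sketch below fine-tunes an off-the-shelf torchvision detector to localize the common femoral artery and vein in ultrasound frames. The class list, data loader, and hyperparameters are assumptions for illustration; the article does not describe the team’s actual model or training pipeline.

```python
# Hedged sketch: fine-tuning a pretrained detector on ultrasound frames.
# The dataset, class list, and hyperparameters are placeholders only.

import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Background plus the two vascular landmarks mentioned in the article.
CLASSES = ["__background__", "common_femoral_artery", "common_femoral_vein"]


def build_model(num_classes=len(CLASSES)):
    # Start from COCO-pretrained weights (the "transfer" part), then
    # replace the box-prediction head for the ultrasound classes.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model


def train(model, data_loader, epochs=10, lr=1e-4, device="cuda"):
    model.to(device).train()
    optimizer = torch.optim.AdamW(
        [p for p in model.parameters() if p.requires_grad], lr=lr
    )
    for _ in range(epochs):
        for images, targets in data_loader:  # targets: boxes + labels per frame
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            loss_dict = model(images, targets)  # detection losses in train mode
            loss = sum(loss_dict.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```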

“The beauty of the on-device display is that the user never needs to interpret, or even see, the ultrasound imagery,” says Mohit Joshi, the team member who designed the display. “They are simply directed to move the device until a rectangle, representing the target vessel, is in the center of the screen.”
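In other words, the guidance step reduces to comparing the detected vessel’s bounding box against the center of the image and turning the offset into a cue. A toy version of that mapping, with an invented pixel tolerance, might look like this:

```python
# Toy version of the on-screen guidance logic: turn a detection into a
# directional cue. The tolerance and wording are illustrative only.

CENTER_TOLERANCE_PX = 20  # how close to center counts as "on target"


def guidance_cue(box, image_width, image_height):
    """box = (x_min, y_min, x_max, y_max) of the detected target vessel."""
    box_cx = (box[0] + box[2]) / 2
    box_cy = (box[1] + box[3]) / 2
    dx = box_cx - image_width / 2
    dy = box_cy - image_height / 2  # image y grows downward

    if abs(dx) <= CENTER_TOLERANCE_PX and abs(dy) <= CENTER_TOLERANCE_PX:
        return "on target - pull trigger"

    cues = []
    if abs(dx) > CENTER_TOLERANCE_PX:
        cues.append("target is left of center" if dx < 0 else "target is right of center")
    if abs(dy) > CENTER_TOLERANCE_PX:
        cues.append("target is above center" if dy < 0 else "target is below center")
    return ", ".join(cues)
```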

For the user, the device may seem as easy to use as pressing a button to advance a needle, but to ensure rapid and reliable success, a lot is happening behind the scenes. For example, when a patient has lost a large volume of blood and becomes hypotensive, veins that would typically be round and full of blood become flat. When the needle tip reaches the center of the vein, the wall of the vein is likely to “tent” inward rather than being punctured by the needle. As a result, even though the needle is driven to the proper location, it fails to enter the vessel.

To ensure that the needle reliably punctures the vessel, the team engineered the device to be able to check its own work.

“When AI-GUIDE injects the needle toward the center of the vessel, it searches for the presence of blood by creating suction,” says Josh Werblin, the program’s mechanical engineer. “Optics in the device’s handle trigger when blood is present, indicating that the insertion was successful.” This technique is part of why AI-GUIDE has shown very high injection success rates, even in hypotensive scenarios where veins are likely to tent.
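Put another way, the placement check is a confirm-or-retry loop wrapped around the insertion: apply suction, watch the handle optics for blood, and only then allow the guidewire to advance. Here is a hedged sketch of that logic, with invented sensor, actuator, and timing names:

```python
# Hedged sketch of the needle-placement check described above: after the
# needle fires, suction is applied and an optical sensor in the handle
# looks for blood. Names, timings, and the retry policy are invented.

import time


def confirm_needle_in_vessel(device, timeout_s=2.0, poll_s=0.05):
    """Return True once blood is detected at the handle optics, else False."""
    device.apply_suction()
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if device.blood_detected():  # optical sensor in the handle
            return True
        time.sleep(poll_s)
    return False


def insert_with_verification(device, max_attempts=2):
    for _ in range(max_attempts):
        device.insert_needle()
        if confirm_needle_in_vessel(device):
            return True             # safe to advance the guidewire
        device.retract_needle()     # e.g. the vein wall "tented" and was not punctured
    return False
```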

Lincoln Laboratory researchers and physicians from the Massachusetts General Hospital Center for Ultrasound Research and Translation collaborated to build the AI-GUIDE system. Photo courtesy of Massachusetts General Hospital.

Recently, the team published a paper in the journal Biosensors that reports on AI-GUIDE’s needle insertion success rates. Users with medical experience ranging from zero to greater than 15 years tested AI-GUIDE on an artificial model of human tissue and blood vessels, and one expert user tested it on a series of live, sedated pigs. The team reported that after only two minutes of verbal training, all users of the device on the artificial human tissue were successful in placing a needle, with all but one completing the task in less than one minute. The expert user was also successful in quickly placing both the needle and the integrated guidewire and catheter in about a minute. The needle insertion speed and accuracy were comparable to those of experienced clinicians operating in hospital environments on human patients.

Theodore Pierce, a radiologist and collaborator from MGH, says AI-GUIDE’s design, which makes it stable and easy to use, directly translates to low training requirements and effective performance. “AI-GUIDE has the potential to be faster, more precise, safer, and require less training than current manual image-guided needle placement procedures,” he says. “The modular design also permits easy adaptation to a variety of clinical scenarios beyond vascular access, including minimally invasive surgery, image-guided biopsy, and imaging-directed cancer therapy.”

In 2021, the team received an R&D 100 Award for AI-GUIDE, recognizing it among the year’s most innovative new technologies available for license or on the market. 

What’s next?

Right now, the team is continuing to test the device and work on fully automating every step of its operation. In particular, they want to automate the guidewire and catheter insertion steps to further reduce risk of user error or potential for infection.

“Retraction of the needle after catheter placement reduces the chance of an inadvertent needle injury, a serious complication in practice which can result in the transmission of diseases such as HIV and hepatitis,” says Pierce. “We hope that a reduction in manual manipulation of procedural components, resulting from complete needle, guidewire, and catheter integration, will reduce the risk of central line infection.”

AI-GUIDE was built and tested within Lincoln Laboratory’s new Virtual Integration Technology Lab (VITL), which was established to bring a medical device prototyping capability to the laboratory.

“Our vision is to rapidly prototype intelligent medical devices that integrate AI, sensing — particularly portable ultrasound — and miniature robotics to address critical unmet needs for both military and civilian care,” says Laura Brattain, who is the AI-GUIDE project co-lead and also holds a visiting scientist position at MGH. “In working closely with our clinical collaborators, we aim to develop capabilities that can be quickly translated to the clinical setting. We expect that VITL’s role will continue to grow.”

AutonomUS, a startup company founded by AI-GUIDE’s MGH co-inventors, recently secured an option for the intellectual property rights for the device. AutonomUS is actively seeking investors and strategic partners.

“We see the AI-GUIDE platform technology becoming ubiquitous throughout the health-care system,” says Johnson, “enabling faster and more accurate treatment by users with a broad range of expertise, for both pre-hospital emergency interventions and routine image-guided procedures.”

This work was supported by the U.S. Army Combat Casualty Care Research Program and Joint Program Committee – 6. Nancy DeLosa, Forrest Kuhlmann, Jay Gupta, Brian Telfer, David Maurer, Wes Hill, Andres Chamorro, and Allison Cheng provided technical contributions, and Arinc Ozturk, Xiaohong Wang, and Qian Li provided guidance on clinical use.
