Archive 01.07.2019


The RoboBee flies solo

Changes to the Robobee — including an additional pair of wings and improvements to the actuators and transmission ratio — made the vehicle more efficient and allowed the addition of solar cells and an electronics panel. This Robobee is the first to fly without a power cord and is the lightest untethered vehicle to achieve sustained flight. Credit: Harvard Microrobotics Lab/Harvard SEAS

By Leah Burrows

In the Harvard Microrobotics Lab, on a late afternoon in August, decades of research culminated in a moment of stress as the tiny, groundbreaking Robobee made its first solo flight.

Graduate student Elizabeth Farrell Helbling, Ph.D.’19, and postdoctoral fellow Noah T. Jafferis, Ph.D. from Harvard’s Wyss Institute for Biologically Inspired Engineering, the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), and the Graduate School of Arts and Sciences caught the moment on camera.

Helbling, who has worked on the project for six years, counted down: “Three, two, one, go.”

The bright halogens switched on and the solar-powered Robobee launched into the air. For a terrifying second, the tiny robot, still without on-board steering and control, careened towards the lights.

Off camera, Helbling exclaimed and cut the power. The Robobee fell dead out of the air, caught by its Kevlar safety harness.

“That went really close to me,” Helbling said, with a nervous laugh.

“It went up,” Jafferis, who has also worked on the project for about six years, responded excitedly from the high-speed camera monitor where he was recording the test.

And with that, Harvard University’s Robobee reached its latest major milestone — becoming the lightest vehicle ever to achieve sustained untethered flight.

“This is a result several decades in the making,” said Robert Wood, Ph.D., Core Faculty member of the Wyss Institute, the Charles River Professor of Engineering and Applied Sciences at SEAS, and principal investigator of the Robobee project. “Powering flight is something of a Catch-22 as the tradeoff between mass and power becomes extremely problematic at small scales where flight is inherently inefficient. It doesn’t help that even the smallest commercially available batteries weigh much more than the robot. We have developed strategies to address this challenge by increasing vehicle efficiency, creating extremely lightweight power circuits, and integrating high efficiency solar cells.”

The milestone is described in Nature.

To achieve untethered flight, this latest iteration of the Robobee underwent several important changes, including the addition of a second pair of wings. “The change from two to four wings, along with less visible changes to the actuator and transmission ratio, made the vehicle more efficient, gave it more lift, and allowed us to put everything we need on board without using more power,” said Jafferis. (The addition of the wings also earned this Robobee the nickname X-Wing, after the four-winged starfighters from Star Wars.)

That extra lift, with no additional power requirements, allowed the researchers to cut the power cord — which has kept the Robobee tethered for nearly a decade — and attach solar cells and an electronics panel to the vehicle.

The solar cells, the smallest commercially available, weigh 10 milligrams each and deliver 0.76 milliwatts of power per milligram at full sun intensity. The Robobee X-Wing needs the power of about three Earth suns to fly, making outdoor flight out of reach for now. Instead, the researchers simulate that level of sunlight in the lab with halogen lights. The solar cells are connected to an electronics panel under the bee, which converts the low-voltage output of the solar array into the high-voltage drive signals needed to control the actuators. The solar cells sit about three centimeters above the wings to avoid interference.

In all, the final vehicle, with the solar cells and electronics, weighs 259 milligrams (about a quarter of a paper clip) and uses about 120 milliwatts of power, less than it would take to light a single bulb on a string of LED Christmas lights.
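
A quick back-of-the-envelope calculation, combining the figures from the two preceding paragraphs, shows how the power budget closes. The sketch below assumes the cells' output scales linearly with illumination; the resulting cell count and array mass are estimates derived from the quoted numbers, not values reported by the team.

```python
import math

# Figures quoted in the article
cell_mass_mg = 10.0        # mass of one solar cell [mg]
power_density = 0.76       # output at one-sun illumination [mW per mg]
suns = 3.0                 # the X-Wing needs roughly three Earth suns to fly
required_power_mw = 120.0  # total power draw of the vehicle [mW]
vehicle_mass_mg = 259.0    # total vehicle mass [mg]

# Assumption: cell output scales linearly with light intensity
power_per_cell_mw = cell_mass_mg * power_density * suns   # ~22.8 mW per cell

cells_needed = math.ceil(required_power_mw / power_per_cell_mw)  # ~6 cells
array_mass_mg = cells_needed * cell_mass_mg                      # ~60 mg

print(f"~{cells_needed} cells ({array_mass_mg:.0f} mg, "
      f"{100 * array_mass_mg / vehicle_mass_mg:.0f}% of the vehicle) "
      f"cover the {required_power_mw:.0f} mW budget")
```

Under that linear-scaling assumption, the solar array alone accounts for roughly a quarter of the 259-milligram vehicle, which makes the mass-versus-power Catch-22 Wood describes very concrete.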

“When you see engineering in movies, if something doesn’t work, people hack at it once or twice and suddenly it works. Real science isn’t like that,” said Helbling. “We hacked at this problem in every which way to finally achieve what we did. In the end, it’s pretty thrilling.” The researchers will continue to hack away, aiming to bring down the power and add on-board control to enable the Robobee to fly outside.

To achieve untethered flight, the latest iteration of the Robobee underwent several important changes, including the addition of a second pair of wings. (Video courtesy of the Harvard Microrobotics Lab/Harvard SEAS)

“Over the life of this project we have sequentially developed solutions to challenging problems, like how to build complex devices at millimeter scales, how to create high-performance millimeter-scale artificial muscles, bioinspired designs, novel sensors, and flight control strategies,” said Wood. “Now that power solutions are emerging, the next step is onboard control. Beyond these robots, we are excited that these underlying technologies are finding applications in other areas such as minimally invasive surgical devices, wearable sensors, assistive robots, and haptic communication devices – to name just a few.”

Harvard has developed a portfolio of intellectual property (IP) related to the fabrication process for millimeter-scale devices. This IP, as well as related technologies, can be applied to microrobotics, medical devices, consumer electronics and a wide range of complex electromechanical systems. Harvard’s Office of Technology Development is exploring opportunities for commercial impact in these fields.

This research was co-authored by Michael Karpelson, Ph.D., Staff Electrical Engineer on the Institute’s Advanced Technology Team. It was supported by the National Science Foundation and the Office of Naval Research.

Study: Social robots can benefit hospitalized children

A new study by researchers from MIT, Boston Children’s Hospital, and elsewhere shows that a “social robot,” named Huggable (pictured), can be used in support sessions to boost positive emotions in hospitalized children.
Image: Courtesy of the Personal Robots Group, MIT Media Lab

A new study demonstrates, for the first time, that “social robots” used in support sessions held in pediatric units at hospitals can lead to more positive emotions in sick children.

Many hospitals host interventions in pediatric units, where child life specialists provide clinical interventions to hospitalized children for developmental and coping support. This involves play, preparation, education, and behavioral distraction for routine medical care, as well as before, during, and after difficult procedures. Traditional interventions include therapeutic medical play and normalizing the environment through activities such as arts and crafts, games, and celebrations.

For the study, published today in the journal Pediatrics, researchers from the MIT Media Lab, Boston Children’s Hospital, and Northeastern University deployed a robotic teddy bear, “Huggable,” across several pediatric units at Boston Children’s Hospital. More than 50 hospitalized children were randomly split into three groups of interventions that involved Huggable, a tablet-based virtual Huggable, or a traditional plush teddy bear. In general, Huggable improved various patient outcomes over those other two options.  

The study primarily demonstrated the feasibility of integrating Huggable into the interventions. But results also indicated that children playing with Huggable experienced more positive emotions overall. They also got out of bed and moved around more, and emotionally connected with the robot, asking it personal questions and inviting it to come back later to meet their families. “Such improved emotional, physical, and verbal outcomes are all positive factors that could contribute to better and faster recovery in hospitalized children,” the researchers write in their study.

Although it is a small study, it is the first to explore social robotics in a real-world inpatient pediatric setting with ill children, the researchers say. Other studies have been conducted in labs, have studied very few children, or were conducted in public settings without any patient identification.

But Huggable is designed only to assist health care specialists — not replace them, the researchers stress. “It’s a companion,” says co-author Cynthia Breazeal, an associate professor of media arts and sciences and founding director of the Personal Robots group. “Our group designs technologies with the mindset that they’re teammates. We don’t just look at the child-robot interaction. It’s about [helping] specialists and parents, because we want technology to support everyone who’s invested in the quality care of a child.”

“Child life staff provide a lot of human interaction to help normalize the hospital experience, but they can’t be with every kid, all the time. Social robots create a more consistent presence throughout the day,” adds first author Deirdre Logan, a pediatric psychologist at Boston Children’s Hospital. “There may also be kids who don’t always want to talk to people, and respond better to having a robotic stuffed animal with them. It’s exciting knowing what types of support we can provide kids who may feel isolated or scared about what they’re going through.”

Joining Breazeal and Logan on the paper are: Sooyeon Jeong, a PhD student in the Personal Robots group; Brianna O’Connell, Duncan Smith-Freedman, and Peter Weinstock, all of Boston Children’s Hospital; and Matthew Goodwin and James Heathers, both of Northeastern University.

Boosting mood

First prototyped in 2006, Huggable is a plush teddy bear with a screen depicting animated eyes. While the eventual goal is to make the robot fully autonomous, it is currently operated remotely by a specialist in the hall outside a child’s room. Through custom software, a specialist can control the robot’s facial expressions and body actions, and direct its gaze. The specialist can also talk through a speaker — with their voice automatically shifted to a higher pitch to sound more childlike — and monitor the participants via camera feed. The tablet-based avatar of the bear had identical gestures and was also remotely operated.
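
As an illustration of the voice pipeline described above, the snippet below shifts a recorded voice up by a few semitones using the open-source librosa library. This is only a generic sketch of the technique; the article does not say which software or shift amount the team actually used, and the file names here are placeholders.

```python
import librosa
import soundfile as sf

# Load the specialist's voice recording (placeholder file name) at its native sample rate
voice, sr = librosa.load("specialist_voice.wav", sr=None)

# Shift the pitch up by 4 semitones to sound more childlike
# (the actual shift used in the Huggable system is not reported in the article)
childlike = librosa.effects.pitch_shift(voice, sr=sr, n_steps=4)

sf.write("huggable_voice.wav", childlike, sr)
```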

During the interventions involving Huggable — with kids ages 3 to 10 — a specialist would sing nursery rhymes to younger children through the robot and move its arms during the song. Older kids would play the I Spy game, where they have to guess an object in the room described by the specialist through Huggable.

Through self-reports and questionnaires, the researchers recorded how much the patients and families liked interacting with Huggable. Additional questionnaires assessed patients’ positive moods, as well as anxiety and perceived pain levels. The researchers also used cameras mounted in the child’s room to capture speech patterns, which were characterized by software as joyful or sad.

A greater percentage of children and their parents reported that the children enjoyed playing with Huggable more than with the avatar or traditional teddy bear. Speech analysis backed up that result, detecting significantly more joyful expressions among the children during robotic interventions. Additionally, parents noted lower levels of perceived pain among their children.

The researchers noted that 93 percent of patients completed the Huggable-based interventions, and found few barriers to practical implementation, as determined by comments from the specialists.

A previous paper based on the same study found that the robot also seemed to facilitate greater family involvement in the interventions, compared to the other two methods, which improved the intervention overall. “Those are findings we didn’t necessarily expect in the beginning,” says Jeong, also a co-author on the previous paper. “We didn’t tell family to join any of the play sessions — it just happened naturally. When the robot came in, the child and robot and parents all interacted more, playing games or introducing the robot.”

An automated, take-home bot

The study also generated valuable insights for developing a fully autonomous Huggable robot, which is the researchers’ ultimate goal. They were able to determine which physical gestures are used most and least often, and which features specialists may want for future iterations. Huggable, for instance, could introduce doctors before they enter a child’s room or learn a child’s interests and share that information with specialists. The researchers may also equip the robot with computer vision, so it can detect certain objects in a room to talk about those with children.

“In these early studies, we capture data … to wrap our heads around an authentic use-case scenario where, if the bear was automated, what does it need to do to provide high-quality standard of care,” Breazeal says.

In the future, that automated robot could be used to improve continuity of care. A child would take home a robot after a hospital visit to further support engagement, adherence to care regimens, and monitoring well-being.

“We want to continue thinking about how robots can become part of the whole clinical team and help everyone,” Jeong says. “When the robot goes home, we want to see the robot monitor a child’s progress. … If there’s something clinicians need to know earlier, the robot can let the clinicians know, so [they’re not] surprised at the next appointment that the child hasn’t been doing well.”

Next, the researchers are hoping to zero in on which specific patient populations may benefit the most from the Huggable interventions. “We want to find the sweet spot for the children who need this type of extra support,” Logan says.

Safe, low-cost, modular, self-programming robots

Many work processes would be almost unthinkable today without robots. But robots operating in manufacturing facilities have often posed risks to workers because they are not responsive enough to their surroundings. To make it easier for people and robots to work in close proximity in the future, Prof. Matthias Althoff of the Technical University of Munich (TUM) has developed a new system: IMPROV.

Wearable robot ‘WalkON Suit’ off to Cybathlon 2020

Standing upright and walking alone are very simple but noble motions that separate humans from many other creatures. Wearable and prosthetic technologies have emerged to augment human function in locomotion and manipulation. However, advances in wearable robot technology have been especially momentous to Byoung-Wook Kim, a triplegic for 22 years following a devastating car accident.

Sandia’s crawling robots, drones detect damage to save wind blades

Drones and crawling robots outfitted with special scanning technology could help wind blades stay in service longer, which may help lower the cost of wind energy at a time when blades are getting bigger, pricier and harder to transport, Sandia National Laboratories researchers say.

Robots may care for you in old age—and your children will teach them

It's likely that before too long, robots will be in the home to care for older people and help them live independently. To do that, they'll need to learn how to do all the little jobs that we might be able to do without thinking. Many modern AI systems are trained to perform specific tasks by analysing thousands of annotated images of the action being performed. While these techniques are helping to solve increasingly complex problems, they still focus on very specific tasks and require lots of time and processing power to train.

#289: On Design in Human-Robot Interaction, with Bilge Mutlu


In this episode, Audrow Nash interviews Bilge Mutlu, Associate Professor at the University of Wisconsin–Madison, about design thinking in human-robot interaction. Professor Mutlu discusses design thinking at a high level, how design relates to science, and the main areas of his work: the design space, the evaluation space, and how features are used within a context. He also gives advice on how to apply a design-oriented mindset.

Bilge Mutlu

Bilge Mutlu is an Associate Professor of Computer Science, Psychology, and Industrial Engineering at the University of Wisconsin–Madison. He directs the Wisconsin HCI Laboratory and organizes the WHCI+D Group. He received his PhD degree from Carnegie Mellon University‘s Human-Computer Interaction Institute.


The world’s smallest autonomous racing drone

Racing team 2018-2019: Christophe De Wagter, Guido de Croon, Shuo Li, Phillipp Dürnay, Jiahao Lin, Simon Spronk

Autonomous drone racing
Drone racing is becoming a major e-sport. Enthusiasts – and now also professionals – transform drones into seriously fast racing platforms. Expert drone racers can reach speeds up to 190 km/h. They fly by looking at a first-person view (FPV) from a camera mounted on the front of the drone, which transmits images back to the pilot.

In recent years, advances in areas such as artificial intelligence, computer vision, and control have raised the question of whether drones could fly faster than human pilots. The advantage for the drone could be that it can sense much more than the human pilot (such as accelerations and rotation rates from its inertial sensors) and process all image data more quickly on board. Moreover, its intelligence could be shaped for a single goal: racing as fast as possible.

In the quest for a fast-flying, autonomous racing drone, multiple autonomous drone racing competitions have been organized in the academic community. These “IROS” drone races (IROS being one of the most well-known international robotics conferences) have been held since 2016. Over these years, the speed of the drones has gradually improved, with the fastest drones in the competition now moving at ~2 m/s.

Smaller
Most autonomous racing drones are equipped with high-performance processors, multiple high-quality cameras, and sometimes even laser scanners. This allows them to use state-of-the-art solutions for visual perception, like building maps of the environment or accurately tracking how the drone moves over time. However, it also makes the drones relatively heavy and expensive.

At the Micro Air Vehicle laboratory (MAVLab) of TU Delft, our aim is to make lightweight and cheap autonomous racing drones. Such drones could be used by many drone racing enthusiasts to train with or fly against. If the drone becomes small enough, it could even be used for racing at home. Aiming for “small” means serious limitations on the sensors and processing that can be carried onboard. This is why in the IROS drone races we have always focused on monocular vision (a single camera) and on software algorithms for vision, state estimation, and control that are computationally highly efficient.


At 72 grams and 10 cm in diameter, the modified “Trashcan” drone is currently the smallest autonomous racing drone in the world. In the background: Shuo Li, a PhD student working on autonomous drone racing at the MAVLab.

A 72-gram autonomous racing drone
Here, we report on how we made a tiny autonomous racing drone fly through a racing track at an average speed of 2 m/s, which is competitive with other, larger state-of-the-art autonomous racing drones.

The drone, a modified Eachine “Trashcan”, is 10 cm in diameter and weighs 72 grams. This weight includes a 17-gram JeVois smart camera, which consists of a single rolling-shutter CMOS camera, a 4-core ARM v7 1.34 GHz processor with 256 MB RAM, and a 2-core Mali GPU. Although limited compared to the processors used on other drones, we consider it more than powerful enough: with the algorithms explained below, the drone actually uses only a single CPU core. The JeVois camera communicates with a 4.2-gram Crazybee F4 Pro flight controller running the Paparazzi autopilot via the MAVLink communication protocol. Both the JeVois code and the Paparazzi code are open source and available to the community.
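
As a generic illustration of that MAVLink link (not the team's actual code, whose message set and wiring are not detailed here), a companion computer could read the autopilot's attitude with the pymavlink library roughly as follows; the serial device and baud rate below are placeholders.

```python
from pymavlink import mavutil

# Open the serial link to the flight controller (device and baud rate are placeholders)
link = mavutil.mavlink_connection("/dev/ttyS0", baud=115200)
link.wait_heartbeat()  # block until the autopilot announces itself

# Read one attitude message; roll, pitch and yaw are reported in radians
msg = link.recv_match(type="ATTITUDE", blocking=True)
print(f"roll={msg.roll:.3f} pitch={msg.pitch:.3f} yaw={msg.yaw:.3f}")
```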

An important characteristic of our approach to drone racing is that we do not rely on accurate but computationally expensive methods for visual Simultaneous Localization And Mapping (SLAM) or Visual Inertial Odometry (VIO). Instead, we focus on having the drone predict its motion as well as possible with an efficient prediction model and correct any drift of the model with vision-based gate detections.

Prediction
A typical prediction model would involve the integration of the accelerometer readings. However, on small drones the Inertial Measurement Unit (IMU) is subject to a lot of vibration, leading to very noisy accelerometer readings. Integrating such noisy measurements quickly leads to an enormous drift in both the velocity and position estimates of the drone. Therefore, we have opted for a simpler solution, in which the IMU is only used to determine the attitude of the drone. This attitude can then be used to predict the forward acceleration, as illustrated in the figure below. If one assumes the drone to fly at a constant height, the thrust component in the z-direction has to equal the gravitational force. Given a specific pitch angle, this relation leads to a specific forward force due to the thrust. The prediction model then updates the velocity based on this predicted forward force and the expected drag force given the estimated velocity.

Prediction model for the tiny drone. The drone has an estimate of its attitude, including the pitch angle (θ). Assuming the drone to fly at a constant height, the force straight up (Tz) should equal gravity (g). Together, these two pieces allow us to calculate the thrust force that should be delivered by the drone’s four propellers (T) and, consequently, also the force that is exerted forwards (Tx). The model uses this forward force, and the resulting (backward) drag force (Dx), to update the velocity (vx) and position of the drone when not seeing gates.
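
Written out as code, the prediction step in the caption above reduces to a few lines. The sketch below is a minimal reading of that model, assuming constant height and a linear drag term; the drag coefficient is a placeholder that would have to be identified on the real vehicle.

```python
import math

G = 9.81          # gravitational acceleration [m/s^2]
DRAG_COEFF = 0.5  # placeholder linear drag coefficient [1/s], to be identified on the real drone

def predict_forward_motion(vx, x, pitch, dt):
    """One step of the constant-height prediction model.

    vx    : current forward velocity estimate [m/s]
    x     : current forward position estimate [m]
    pitch : pitch angle theta from the IMU attitude estimate [rad]
    dt    : time step [s]
    """
    # Constant height: the vertical thrust component cancels gravity, so the
    # specific thrust is g / cos(theta) and its forward component is g * tan(theta).
    forward_thrust = G * math.tan(pitch)

    # Linear drag opposing the current velocity
    drag = DRAG_COEFF * vx

    ax = forward_thrust - drag   # predicted forward acceleration [m/s^2]
    vx_new = vx + ax * dt
    x_new = x + vx_new * dt
    return vx_new, x_new
```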

Vision-based corrections
The prediction model is corrected with the help of vision-based position measurements. First, a snake-gate algorithm is used to detect the colored gate in the image. This algorithm is extremely efficient, as it only processes a small portion of the image’s pixels: it samples random image locations and, when it finds the right color, starts following it around to determine the gate’s shape. After a detection, the known size of the gate is used to determine the drone’s position relative to the gate (see the figure below); this is a standard perspective-n-point (PnP) problem. Subsequently, we figure out which gate on the racing track is most likely in view and transform the relative position into a global position measurement. Since our vision process often outputs quite precise position estimates but sometimes also produces significant outliers, we do not use a Kalman filter but a Moving Horizon Estimator for the state estimation. This leads to much more robust position and velocity estimates in the presence of outliers.

The gates and their sizes are known. When the drone detects a gate, it can use this knowledge to calculate its position relative to that gate. The global layout of the track and the drone’s current estimated position are used to determine which gate the drone is most likely looking at. This way, the relative position can be transformed into a global position estimate.
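
A minimal sketch of that step is shown below. The gate map coordinates, the yaw handling, and the nearest-gate heuristic are illustrative assumptions for a flat, four-gate track; the actual implementation lives in the team's open-source Paparazzi and JeVois code.

```python
import math

# Known global map of the gates: position (x, y) [m] and heading psi [rad].
# The coordinates below are placeholders, not the actual Cyberzoo layout.
GATES = [
    {"pos": (0.0, 0.0), "psi": 0.0},
    {"pos": (4.0, 0.0), "psi": math.pi / 2},
    {"pos": (4.0, 4.0), "psi": math.pi},
    {"pos": (0.0, 4.0), "psi": -math.pi / 2},
]

def global_position(rel_x, rel_y, estimated_pos):
    """Turn a gate-relative detection into a global position measurement.

    rel_x, rel_y  : drone position in the detected gate's frame (from the PnP solution) [m]
    estimated_pos : current (x, y) position estimate from the prediction model [m]
    """
    # Assume the drone is looking at the gate closest to its current position estimate
    gate = min(
        GATES,
        key=lambda g: (g["pos"][0] - estimated_pos[0]) ** 2
                      + (g["pos"][1] - estimated_pos[1]) ** 2,
    )

    # Rotate the relative position into the global frame and add the gate's position
    c, s = math.cos(gate["psi"]), math.sin(gate["psi"])
    gx = gate["pos"][0] + c * rel_x - s * rel_y
    gy = gate["pos"][1] + s * rel_x + c * rel_y
    return gx, gy
```

The resulting global measurement is what the Moving Horizon Estimator uses to correct the drift of the prediction model.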

Racing performance and future steps
The drone used the newly developed algorithms to race along a 4-gate race track in TU Delft’s Cyberzoo. It can fly multiple laps at an average speed of 2 m/s, which is competitive with larger, state-of-the-art autonomous racing drones (see the video at the top). Thanks to the central role of gate detections in the drone’s algorithms, the drone can cope with moderate displacements of the gates.

Possible future directions of research are to make the drone smaller and fly faster. In principle, being small is an advantage, since the gates are relatively bigger. This allows the drone to choose its trajectory more freely than a big drone, which may allow for faster trajectories. In order to better exploit this characteristic, we would have to fit optimal control algorithms into the onboard processing. Moreover, we want to make the vision algorithms more robust – as the current color-based snake-gate algorithm is quite dependent on lighting conditions. An obvious option here is to start using deep neural networks, which would have to fit within the dual-core Mali GPU on the JeVois.

arXiv article: Visual Model-predictive Localization for Computationally Efficient Autonomous Racing of a 72-gram Drone, by Shuo Li, Erik van der Horst, Philipp Duernay, Christophe De Wagter, and Guido C.H.E. de Croon.

The little robot that could

Root is controlled using an iPad app that has three different levels of coding, allowing students as young as four years old to learn the fundamentals of programming. Credit: Wyss Institute at Harvard University

iRobot Corp. announced its acquisition of Root Robotics, Inc., whose educational Root coding robot got its start as a summer research project at the Wyss Institute for Biologically Inspired Engineering in 2011. It subsequently developed into a robust learning tool that is used in over 500 schools to teach children between the ages of four and twelve how to code in an engaging, intuitive way. iRobot plans to incorporate the Root robot into its growing portfolio of educational robot products and to continue the work of scaling up production and expanding Root’s programming content that began when Root Robotics was founded by former Wyss Institute members in 2017.

The Root robot can be programmed to perform a variety of actions based on what students draw on a whiteboard, including avoiding obstacles, playing music, and flashing its lights. Credit: Wyss Institute at Harvard University

“We’re honored that we got to see a Wyss Institute technology go from its earliest stages to where we are today, with the opportunity to make a gigantic impact on the world,” said Zivthan Dubrovsky, former Bioinspired Robotics Platform Lead at the Wyss Institute and co-founder of Root Robotics who is now the General Manager of Educational Robots at iRobot. “We’re excited to see how this new chapter in Root’s story can further amplify our mission of making STEM education accessible to students of any age in any classroom around the world.”

Root began in the lab of Wyss Core Faculty Member and Bioinspired Robotics Platform co-lead Radhika Nagpal, Ph.D., who was investigating the idea of robots that could climb metal structures using magnetic wheels. “Most whiteboards in classrooms are backed with metal, so I thought it would be wonderful if a robot could automatically erase the whiteboard as I was teaching – ironically, we referred to it as a ‘Roomba® for whiteboards,’ because many aspects were directly inspired by iRobot’s Roomba at the time,” said Nagpal, who is also the Fred Kavli Professor of Computer Science at Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS). “Once we had a working prototype, the educational potential of this robot was immediately obvious. If it could be programmed to detect ink, navigate to it, and erase it, then it could be used to teach students about coding algorithms of increasing complexity.”

That prototype was largely built by Raphael Cherney, first as a Research Engineer in Nagpal’s group at Harvard in 2011, and then beginning in 2013 when he was hired to work on developing Root full-time along with Dubrovsky and other members of the Wyss Institute. “When Raphael and Radhika pitched me the idea of Root, I fell in love with it immediately,” said Dubrovsky. “My three daughters were all very young at the time and I wanted them to have exposure to STEM concepts like coding and engineering, but I was frustrated by the lack of educational systems that were designed for children their age. The idea of being able to create that for them was really what motivated me to throw all my weight behind the project.”

Under Cherney and Dubrovsky’s leadership, Root’s repertoire expanded to include drawing shapes on the whiteboard as it wheeled around, navigating through obstacles drawn on the whiteboard, playing music, and more. The team also developed Root’s coding interface, which has three levels of increasing complexity that are designed to help students from preschool to high school easily grasp the concepts of programming and use them to create their own projects. “The tangible nature of a robot really brings the code to life, because the robot is ‘real’ in a way that code isn’t – you can watch it physically carrying out the instructions that you’ve programmed into it,” said Cherney, who co-founded Root Robotics and is now a Principal Systems Engineer at iRobot. “It helps turn coding into a social activity, especially for kids, as they learn to work in teams and see coding as a fun and natural thing to do.”

Over the next three years the team iterated on Root’s prototype and began testing it in classrooms in and around Boston, getting feedback from students and teachers to get the robot closer to its production-ready form. “Robots are very hard to build, and the support we had from the Wyss Institute let us do it right, instead of just fast,” said Cherney. “We were able to develop Root from a prototype to a product that worked in schools and was doing what we envisioned, and the whole process was much smoother than it would have been if we had just been a team working in a garage.”

By 2016, they felt ready for commercialization. They ran a Kickstarter® campaign as a market test to see if they had a viable consumer business, and raised nearly $400,000 from almost 2,000 backers, far exceeding their target of $250,000. Buoyed by this vote of confidence from potential customers, Dubrovsky and Cherney left the Wyss Institute in the summer of 2017 to co-found Root Robotics, with Nagpal serving as Scientific Advisor, $2.5 million in seed funds, and a license from Harvard’s Office of Technology Development. While most of their time at the Wyss Institute was spent getting the robot right, the company focused on getting the content of Root’s programming app up to par, setting up a classroom in their office and inviting students to come try out the robot, then updating the content with insights learned from those experiences.

Once they achieved their vision for three different levels of programming targeting students of different ages, they shipped Root robots to their Kickstarter backers and made it available for purchase on their website in September 2018. Since then, over a million coding projects have been run on the Root app. “What’s been most rewarding for me personally is seeing my kids take Root to their classrooms and show their teachers and their peers what they’ve been able to make a robot do. Getting to see them problem-solve and iterate and then achieve something they’re proud of is priceless,” said Dubrovsky. “I’ve been pleasantly surprised by seeing people come up with new things to do with the robot that we never thought of,” added Cherney. “The way it seems to immediately unlock creativity is beautiful and inspiring.”

Root Robotics co-founders Raphael Cherney (left) and Zee Dubrovsky (center) are joining iRobot’s Educational Robotics division. Root started as a project in the lab of Wyss Faculty member Radhika Nagpal (right). Credit: Wyss Institute at Harvard University

“One of the things that really attracted us to Root was that it was designed as an education product from the ground up, which fits perfectly with our own deep passion for using robots as a way of turbo charging STEM education,” said Colin Angle, chairman and CEO of iRobot. “The Root robot has tremendous value as a tool for teaching students not only coding, but also concepts of AI, engineering and autonomous robots, all of which are very important for our future.”

Nagpal is still sometimes floored by the fact that what started as an idea for a simple whiteboard-erasing robot ended up developing into such a robust teaching tool. “Without the Wyss Institute, I would not have even thought to try and commercialize this idea,” she said. “It supported an amazing team of engineers in creating and testing Root over several years, which allowed us to be able to raise the funds to launch the company with a product that was so well-developed that it now has the potential to really scale up and make a big difference in the world.”

“Root Robotics is one of the great success stories to come out of the Wyss Institute, partially because of how quickly the team recognized its potential impact and focused on de-risking it both technically and commercially,” said Wyss Founding Director Donald Ingber, M.D., Ph.D. “It was fantastic to see Root take root at the Institute, and we are immensely proud of them and their ability to develop a technology that can truly bring about positive change in our world by targeting children who are the creators and visionaries of tomorrow.” Ingber is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children’s Hospital, as well as Professor of Bioengineering at SEAS.
