Page 376 of 522

#319: Micro-scale Surgical Robots, with Eric Diller


In this episode, Audrow Nash interviews Eric Diller, Assistant Professor at the University of Toronto, on wireless micro-scale robots that could eventually be used in human surgery.  Diller speaks about the design, control, and manufacture of micro-scale surgical robotic devices, as well as when we might see this technology in the operating room.

Eric Diller

Dr. Diller received his B.S. and M.S. in Mechanical Engineering at Case Western Reserve University, and Ph.D. at Carnegie Mellon University in 2013. His work is enabling a new approach to non-invasive medical procedures, micro-factories and scientific tools. He does this by shrinking the mechanical and electrical components of robots to centimeter, millimeter or even micrometer size. He uses magnetic fields and other smart-material actuation methods to make mobile functional devices. Dr. Diller envisions a future where drug delivery and surgery can be done in a fast, painless and focused way, and where new materials and devices can be manufactured using swarms of tiny gripping, cutting, and sensing wireless robots.

Dr. Diller has received the MIE Early Career Teaching Award, the UofT Connaught New Researcher Award, the Ontario Early Researcher Award, and the I.W. Smith Award from the Canadian Society for Mechanical Engineering.

A three-agent robotic system for Mars exploration

Mars, also known as the red planet, has been the focus of numerous research studies, as some of its characteristics have sparked discussions about its possible inhabitability. The National Aeronautics and Space Administration (NASA) and a few other space agencies have thus sent a number of rovers and other spacecraft to Mars with the hope of better understanding its geology and environment.

Top 5 ways to make better AI with less data

1. Transfer Learning

Transfer learning is used a lot in machine learning now since the benefits are big. The general idea is simple. You train a big neural network on a general task with a lot of data and a lot of training. When you then have a specific problem, you sort of “cut the end off” the big network and train a few new layers with your own data. The big network already understands a lot of general patterns, so with transfer learning you don’t have to teach the network those patterns again.

A good example is if you try to train a network to recognize images of different dog species. Without transfer learning you need a lot of data, maybe 100,000 images of different dog species, since the network has to learn everything from scratch. If you train a new model with transfer learning you might only need 50 images of each species.
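As a rough sketch of the “cut the end off” idea, here is a miniature version in plain numpy. A fixed random projection stands in for the frozen pretrained layers (an assumption for illustration, not a real pretrained network), and only a small new logistic-regression head is trained on a tiny labelled set:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the frozen, "pretrained" part of a big network.
# In practice this would be many layers trained on a huge dataset;
# here a fixed random projection is used purely for illustration.
W_frozen = rng.normal(size=(4, 8))

def features(x):
    # Frozen layers: W_frozen is never updated during fine-tuning.
    return np.tanh(x @ W_frozen)

# Tiny labelled set for the new task. Transfer learning lets us get
# away with few examples because the features are already useful.
X = rng.normal(size=(20, 4))
y = (X[:, 0] > 0).astype(float)

# Train only the new "head": a logistic-regression layer on top.
w, b = np.zeros(8), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(features(X) @ w + b)))
    w -= 0.5 * features(X).T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

preds = (1 / (1 + np.exp(-(features(X) @ w + b))) > 0.5).astype(float)
accuracy = np.mean(preds == y)
```

Only the eight head weights are learned here; in a real setting you would load a pretrained model, freeze its layers, and train a new final layer in the same way.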

You can read more about Transfer Learning here.

2. Active learning

Active learning is a data collection strategy that lets you pick the data your AI models would benefit the most from during training. Let’s stick with the dog species example. You have trained a model that can differentiate between different species, but for some reason the model always has trouble identifying German Shepherds. With an active learning strategy you would, automatically or at least through an established process, pick out these images and send them for labelling.
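The core of such a strategy, uncertainty sampling, fits in a few lines. This sketch uses made-up confidence scores; in a real pipeline they would come from your trained classifier:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model confidences for a pool of unlabelled images,
# e.g. P(top predicted species) for each image. Invented numbers.
confidences = rng.uniform(0.4, 1.0, size=10)

# Uncertainty sampling: send the k images the model is least sure
# about (say, the German Shepherd look-alikes) off for labelling.
k = 3
to_label = np.argsort(confidences)[:k]

print(to_label)  # indices of the k least-confident predictions
```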

I made a longer post about how active learning works here.

3. Better data

I’ve put in a strategy here that might sound obvious but is sometimes overlooked. With better quality data you often need far less of it, since the AI does not have to train through the same amount of noise and wrong signals. In the media, AI is often talked about as if “with a lot of data you can do anything”. But in many cases, making an extra effort to get rid of bad data and to make sure that only correctly labelled data is used for training makes more sense than going for more data.

4. GANs

GANs, or Generative Adversarial Networks, are a way to build neural networks that sounds almost futuristic in design. Basically, this kind of neural network is built by having two networks compete against each other in a game where one network creates fake training examples from the data set and the other tries to guess which data is fake and which is real. The network building fake data is called the generator, and the network trying to tell fake from real is called the discriminator. This is a deep learning approach, and both networks keep improving during the game. When the generator gets so good at generating fake data that the discriminator consistently has trouble separating fake from real, we have a finished model.
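To make the game concrete, here is a deliberately tiny one-dimensional GAN in plain numpy, with an affine generator and a logistic discriminator. This is a toy illustration of the two-player loop under invented numbers, nothing like a practical image GAN:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy GAN: the real data is one-dimensional, drawn from N(3, 1).
# Generator: an affine transform of noise. Discriminator: one
# logistic unit. All numbers are invented for illustration.
def discriminator(x, d):            # estimated P(x is real)
    return 1 / (1 + np.exp(-(d[0] * x + d[1])))

def generator(z, g):                # fake sample from noise z
    return g[0] * z + g[1]

g = np.array([1.0, 0.0])            # generator scale and shift
d = np.array([0.0, 0.0])            # discriminator weight and bias
lr = 0.05

for _ in range(2000):
    real = rng.normal(3.0, 1.0, size=32)
    z = rng.normal(size=32)
    fake = generator(z, g)

    # Discriminator step: push P(real) toward 1 and P(fake) toward 0.
    pr, pf = discriminator(real, d), discriminator(fake, d)
    d[0] += lr * np.mean((1 - pr) * real - pf * fake)
    d[1] += lr * np.mean((1 - pr) - pf)

    # Generator step: try to fool the discriminator (P(fake) -> 1).
    pf = discriminator(generator(z, g), d)
    g[0] += lr * np.mean((1 - pf) * d[0] * z)
    g[1] += lr * np.mean((1 - pf) * d[0])
```

As training goes on, the generator's shift g[1] should drift toward the real mean of 3, and once fake and real samples look alike the discriminator's outputs hover around 0.5, which is the point the text calls a finished model.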

With GANs you still need a lot of data, but you don’t need as much labelled data, and since labelling is usually the costly part, you can save both time and money on your data with this approach.

5. Probabilistic Programming

One of my very favorite technologies. Probabilistic programming has a lot of benefits, and one of them is that you can often get away with using less data. The reason is simply that you build “priors” into your models. That means you can code your domain knowledge into the model and let the data take it from there. In many other machine learning approaches, everything has to be learned by the model from scratch, no matter how obvious it is.

A good example here is document data capture models. In many cases the data we are looking for is indicated by the keyword to the left of it, like “ID number: #number#”, which is a common format. With probabilistic programming you can tell the model before training that you expect the data to be to the right of the keyword. Many neural networks have to learn this from scratch, which requires more data.
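To show the “priors” idea without committing to any particular probabilistic-programming language, here is a minimal Beta-Binomial sketch in plain Python. The numbers are invented for illustration: a strong prior encodes the domain knowledge that the value almost always sits to the right of the keyword, so a handful of labelled documents is enough for a confident estimate.

```python
# Minimal Beta-Binomial sketch of the "priors" idea in plain Python.
# This stands in for a real probabilistic-programming model; the
# numbers are invented for illustration. Question: does the value sit
# to the RIGHT of the keyword "ID number:"? Domain knowledge says it
# almost always does, so we encode that belief as a strong prior
# instead of learning it from thousands of labelled documents.

# Beta prior on P(value is right of keyword): 18 pseudo-successes
# against 2 pseudo-failures, i.e. our coded-in domain knowledge.
prior_right, prior_left = 18, 2

# Only five labelled documents, four with the value on the right.
observed_right, observed_left = 4, 1

# Conjugate update: the posterior is Beta(a + successes, b + failures),
# whose mean is a / (a + b).
post_right = prior_right + observed_right
post_left = prior_left + observed_left
p_right = post_right / (post_right + post_left)

print(round(p_right, 2))  # 0.88: a confident estimate from 5 examples
```

A model starting from a flat prior would need far more than five documents to become this confident, which is exactly the data saving the text describes.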

You can also read more about probabilistic programming here: https://www.danrose.ai/blog/63qy8s3vwq8p9mogsblddlti4yojon

Japanese grocery chain testing remotely controlled robot stockers

Japanese grocery chain FamilyMart has teamed up with Tokyo startup Telexistence to test the idea of using a remotely controlled shelf stocking robot named the Model-T to restock grocery shelves. On its website, Telexistence describes the robot as a means for addressing labor shortages in Japan and also as a way to improve social distancing during the pandemic.

PufferBot: A flying robot with an expandable body

Researchers at University of Colorado Boulder's ATLAS Institute and University of Calgary have recently developed an actuated, expandable structure that can be used to fabricate shape-changing aerial robots. In a paper pre-published on arXiv, they introduced a new robot, dubbed PufferBot, which was built using this unique and innovative structure.

Covid-19 – the accelerator to smart logistics, smart transportation and smart supply chain services

Clem Robertson, CEO and Founder of R4DAR Technologies explains how the global pandemic has fast-forwarded demand for automated transport and last mile delivery systems so life can return to the “new normal” and the country can be better prepared for future pandemics.

Robot takes contact-free measurements of patients’ vital signs

By Anne Trafton

During the current coronavirus pandemic, one of the riskiest parts of a health care worker’s job is assessing people who have symptoms of Covid-19. Researchers from MIT, Boston Dynamics, and Brigham and Women’s Hospital hope to reduce that risk by using robots to remotely measure patients’ vital signs.

The robots, which are controlled by a handheld device, can also carry a tablet that allows doctors to ask patients about their symptoms without being in the same room.

“In robotics, one of our goals is to use automation and robotic technology to remove people from dangerous jobs,” says Hen-Wei Huang, an MIT postdoc. “We thought it should be possible for us to use a robot to remove the health care worker from the risk of directly exposing themselves to the patient.”

Using four cameras mounted on a dog-like robot developed by Boston Dynamics, the researchers have shown that they can measure skin temperature, breathing rate, pulse rate, and blood oxygen saturation in healthy patients, from a distance of 2 meters. They are now making plans to test it in patients with Covid-19 symptoms.

“We are thrilled to have forged this industry-academia partnership in which scientists with engineering and robotics expertise worked with clinical teams at the hospital to bring sophisticated technologies to the bedside,” says Giovanni Traverso, an MIT assistant professor of mechanical engineering, a gastroenterologist at Brigham and Women’s Hospital, and the senior author of the study.

The researchers have posted a paper on their system on the preprint server techRxiv, and have submitted it to a peer-reviewed journal. Huang is one of the lead authors of the study, along with Peter Chai, an assistant professor of emergency medicine at Brigham and Women’s Hospital, and Claas Ehmke, a visiting scholar from ETH Zurich.

Measuring vital signs

When Covid-19 cases began surging in Boston in March, many hospitals, including Brigham and Women’s, set up triage tents outside their emergency departments to evaluate people with Covid-19 symptoms. One major component of this initial evaluation is measuring vital signs, including body temperature.

The MIT and BWH researchers came up with the idea to use robotics to enable contactless monitoring of vital signs, to allow health care workers to minimize their exposure to potentially infectious patients. They decided to use existing computer vision technologies that can measure temperature, breathing rate, pulse, and blood oxygen saturation, and worked to make them mobile.

To achieve that, they used a robot known as Spot, which can walk on four legs, similarly to a dog. Health care workers can maneuver the robot to wherever patients are sitting, using a handheld controller. The researchers mounted four different cameras onto the robot — an infrared camera plus three monochrome cameras that filter different wavelengths of light.

The researchers developed algorithms that allow them to use the infrared camera to measure both elevated skin temperature and breathing rate. For body temperature, the camera measures skin temperature on the face, and the algorithm correlates that temperature with core body temperature. The algorithm also takes into account the ambient temperature and the distance between the camera and the patient, so that measurements can be taken from different distances, under different weather conditions, and still be accurate.
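The correction step can be pictured as fitting a simple model from the raw camera reading, ambient temperature, and distance to core body temperature. Everything below is synthetic and purely illustrative; the study's actual model and coefficients are not given in this article:

```python
import numpy as np

# Illustrative sketch: fit a linear correction from the camera's
# skin-temperature reading, the ambient temperature, and the
# camera-to-patient distance to core body temperature. All data and
# coefficients below are synthetic.
rng = np.random.default_rng(2)

n = 200
skin = rng.uniform(33.0, 36.0, n)       # measured facial skin temp (C)
ambient = rng.uniform(5.0, 30.0, n)     # ambient temperature (C)
distance = rng.uniform(1.0, 3.0, n)     # camera-to-patient distance (m)

# Synthetic "ground truth" with a bias from ambient and distance,
# plus a little measurement noise.
core = 0.9 * skin - 0.05 * ambient + 0.3 * distance + 5.0
core += rng.normal(0.0, 0.05, n)

# Least-squares fit of core ~ skin + ambient + distance + constant.
A = np.column_stack([skin, ambient, distance, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, core, rcond=None)

rmse = np.sqrt(np.mean((A @ coef - core) ** 2))
```

Once fitted, such a correction lets readings taken at different distances and in different weather map back to a consistent core-temperature estimate.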

Measurements from the infrared camera can also be used to calculate the patient’s breathing rate. As the patient breathes in and out, wearing a mask, their breath changes the temperature of the mask. Measuring this temperature change allows the researchers to calculate how rapidly the patient is breathing.
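As a sketch of that principle, one can simulate a mask-temperature trace and count its oscillations. The signal model, frame rate, and zero-crossing count below are illustrative assumptions, not the MIT team's actual algorithm:

```python
import numpy as np

# Synthetic mask-temperature trace: one minute of infrared samples
# whose value rises and falls with each breath.
fs = 10.0                                  # camera frame rate (Hz)
t = np.arange(0, 60, 1 / fs)               # one minute of samples
true_rate = 15                             # breaths per minute
mask_temp = 34.0 + 0.3 * np.sin(2 * np.pi * (true_rate / 60) * t + 1.0)

# Count rising zero-crossings of the mean-removed signal: each breath
# produces exactly one, and the window is exactly one minute.
centered = mask_temp - mask_temp.mean()
rising = (centered[:-1] < 0) & (centered[1:] >= 0)
breaths_per_minute = int(np.sum(rising))

print(breaths_per_minute)
```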

The three monochrome cameras each filter a different wavelength of light — 670, 810, and 880 nanometers. These wavelengths allow the researchers to measure the slight color changes that result when hemoglobin in blood cells binds to oxygen and flows through blood vessels. The researchers’ algorithm uses these measurements to calculate both pulse rate and blood oxygen saturation.
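One way to picture the pulse-rate half of this is in the frequency domain: the filtered intensity signal fluctuates slightly with every heartbeat, so its dominant frequency is the pulse. The synthetic signal below is an illustrative assumption, not the researchers' actual pipeline:

```python
import numpy as np

# Synthetic intensity trace at one filtered wavelength: a tiny
# brightness oscillation riding on a constant level, standing in for
# the color changes as oxygenated blood pulses through the vessels.
fs = 30.0                                   # frames per second
t = np.arange(0, 30, 1 / fs)                # 30 seconds of video
pulse_hz = 72 / 60.0                        # 72 beats per minute
intensity = 1.0 + 0.01 * np.sin(2 * np.pi * pulse_hz * t)

# FFT of the mean-removed signal; the peak bin gives the heart rate.
spectrum = np.abs(np.fft.rfft(intensity - intensity.mean()))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
bpm = 60.0 * freqs[np.argmax(spectrum)]

print(round(bpm))
```

Blood oxygen saturation then comes from comparing the oscillation strength across the three wavelengths, since oxygenated and deoxygenated hemoglobin absorb them differently.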

“We didn’t really develop new technology to do the measurements,” Huang says. “What we did is integrate them together very specifically for the Covid application, to analyze different vital signs at the same time.”

Continuous monitoring

In this study, the researchers performed the measurements on healthy volunteers, and they are now making plans to test their robotic approach in people who are showing symptoms of Covid-19, in a hospital emergency department.

While in the near term, the researchers plan to focus on triage applications, in the longer term, they envision that the robots could be deployed in patients’ hospital rooms. This would allow the robots to continuously monitor patients and also allow doctors to check on them, via tablet, without having to enter the room. Both applications would require approval from the U.S. Food and Drug Administration.

The research was funded by the MIT Department of Mechanical Engineering and the Karl van Tassel (1925) Career Development Professorship, and Boston Dynamics.
