
Deep learning in medical imaging – 3D medical image segmentation with PyTorch

The basics of MRI data and its tensor representation are presented, along with the core components needed to apply a deep learning method that handles the task-specific problems (class imbalance, limited data). We also present some features of the open-source medical image segmentation library. Finally, we discuss our preliminary experimental results and provide sources for finding medical imaging data.
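
As a minimal illustration of how class imbalance is often handled in 3D segmentation, the sketch below implements a soft Dice loss in PyTorch; the tensor shapes and the function name are illustrative choices, not code taken from the library mentioned above.

import torch

def soft_dice_loss(logits, targets, eps=1e-6):
    """Soft Dice loss for 3D segmentation, one common remedy for class imbalance.

    logits:  (batch, classes, D, H, W) raw network outputs
    targets: (batch, classes, D, H, W) one-hot ground-truth masks
    """
    probs = torch.softmax(logits, dim=1)
    dims = (0, 2, 3, 4)                              # sum over batch and spatial axes
    intersection = torch.sum(probs * targets, dims)
    cardinality = torch.sum(probs + targets, dims)
    dice_per_class = (2.0 * intersection + eps) / (cardinality + eps)
    return 1.0 - dice_per_class.mean()               # lower is better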

Clayton & McKervey offers tips for business survival during COVID-19, including the use of a “rolling 13-week cash flow analysis”

The realities presented by the COVID-19 health crisis have businesses focused on survival. The most important step to take after ensuring the safety of the team is to protect the company’s cash position to the degree that’s possible.

#306: Microlocation, with David Mindell

In this episode, Abate interviews David Mindell, co-founder of Humatics. David discusses a system they developed that can detect the location of a special tracking device with centimeter-level accuracy, and they are currently developing a device that locates down to millimeter-level accuracy. This solves the core problem of localization for robots. David discusses the technology behind these products and their applications.

David Mindell


David co-founded Humatics with a mission to revolutionize how people and machines locate, navigate and collaborate. He is a professor of Aeronautics and Astronautics at MIT, as well as the Dibner Professor of the History of Engineering and Manufacturing, and Chair of the MIT Task Force on the Work of the Future. He has participated in more than 25 oceanographic expeditions and is an inventor on 8 patents in autonomous helicopters, AI-assisted piloting, and RF navigation. He is the author of five books, including Our Robots, Ourselves: Robotics and the Myths of Autonomy (2015) and Digital Apollo: Human and Machine in Spaceflight (2008). David has undergraduate degrees from Yale and a Ph.D. from MIT.


COVID-19, robots and us – weekly online discussion

Silicon Valley Robotics and the CITRIS People and Robots Initiative are hosting a weekly “COVID-19, robots and us” online discussion with experts from the robotics and health community on Tuesdays at 7pm (California time – PDT). You can sign up for the free event here.

Guest speakers this week are:

Prof Ken Goldberg, UC Berkeley Director of the CITRIS People and Robots Initiative.

Alder Riley, Founder at ideastostuff and a coordinator at Helpful Engineering. Helpful Engineering is a rapidly growing global network created to design, source and execute projects that can help people suffering from the COVID-19 crisis worldwide.

Tra Vu, COO at OhmniLabs, a telepresence robotics and 3D printing startup.

Mark Martin, Regional Director of Advanced Manufacturing Workforce Development, California Community Colleges.

Gui Cavalcanti, CEO/Cofounder of Breeze Automation and Founder of the Open Source COVID-19 Medical Supplies Group. The Open Source COVID-19 Medical Supplies Group is a rapidly growing Facebook group formed to evaluate, design, validate, and source the fabrication of open source emergency medical supplies around the world, given a variety of local supply conditions.

Andra Keay, Managing Director of Silicon Valley Robotics and Visiting Scholar at the CITRIS People and Robots Initiative, will act as moderator.

Beau Ambur, Outreach, Design and Technology Lead for Kickstarter, will be coordinating technology for us.

System trains driverless cars in simulation before they hit the road


A simulation system invented at MIT to train driverless cars creates a photorealistic world with infinite steering possibilities, helping the cars learn to navigate a host of worst-case scenarios before cruising down real streets.

By Rob Matheson


Control systems, or “controllers,” for autonomous vehicles largely rely on real-world datasets of driving trajectories from human drivers. From these data, they learn how to emulate safe steering controls in a variety of situations. But real-world data from hazardous “edge cases,” such as nearly crashing or being forced off the road or into other lanes, are — fortunately — rare.

Some computer programs, called “simulation engines,” aim to imitate these situations by rendering detailed virtual roads to help train the controllers to recover. But the learned control from simulation has never been shown to transfer to reality on a full-scale vehicle.

The MIT researchers tackle the problem with their photorealistic simulator, called Virtual Image Synthesis and Transformation for Autonomy (VISTA). It uses only a small dataset, captured by humans driving on a road, to synthesize a practically infinite number of new viewpoints from trajectories that the vehicle could take in the real world. The controller is rewarded for the distance it travels without crashing, so it must learn by itself how to reach a destination safely. In doing so, the vehicle learns to safely navigate any situation it encounters, including regaining control after swerving between lanes or recovering from near-crashes.  

In tests, a controller trained within the VISTA simulator was able to be safely deployed onto a full-scale driverless car and to navigate through previously unseen streets. When the car was positioned at off-road orientations that mimicked various near-crash situations, the controller was also able to successfully recover it back onto a safe driving trajectory within a few seconds. A paper describing the system has been published in IEEE Robotics and Automation Letters and will be presented at the upcoming ICRA conference in May.

“It’s tough to collect data in these edge cases that humans don’t experience on the road,” says first author Alexander Amini, a PhD student in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “In our simulation, however, control systems can experience those situations, learn for themselves to recover from them, and remain robust when deployed onto vehicles in the real world.”

The work was done in collaboration with the Toyota Research Institute. Joining Amini on the paper are Igor Gilitschenski, a postdoc in CSAIL; Jacob Phillips, Julia Moseyko, and Rohan Banerjee, all undergraduates in CSAIL and the Department of Electrical Engineering and Computer Science; Sertac Karaman, an associate professor of aeronautics and astronautics; and Daniela Rus, director of CSAIL and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science.

Data-driven simulation

Historically, building simulation engines for training and testing autonomous vehicles has been largely a manual task. Companies and universities often employ teams of artists and engineers to sketch virtual environments, with accurate road markings, lanes, and even detailed leaves on trees. Some engines may also incorporate the physics of a car’s interaction with its environment, based on complex mathematical models.

But since there are so many different things to consider in complex real-world environments, it’s practically impossible to incorporate everything into the simulator. For that reason, there’s usually a mismatch between what controllers learn in simulation and how they operate in the real world.

Instead, the MIT researchers created what they call a “data-driven” simulation engine that synthesizes, from real data, new trajectories consistent with road appearance, as well as the distance and motion of all objects in the scene.

They first collect video data from a human driving down a few roads and feed that into the engine. For each frame, the engine projects every pixel into a type of 3D point cloud. Then, they place a virtual vehicle inside that world. When the vehicle makes a steering command, the engine synthesizes a new trajectory through the point cloud, based on the steering curve and the vehicle’s orientation and velocity.

Then, the engine uses that new trajectory to render a photorealistic scene. To do so, it uses a convolutional neural network — commonly used for image-processing tasks — to estimate a depth map, which contains information relating to the distance of objects from the controller’s viewpoint. It then combines the depth map with a technique that estimates the camera’s orientation within a 3D scene. That all helps pinpoint the vehicle’s location and relative distance from everything within the virtual simulator.

Based on that information, it reorients the original pixels to recreate a 3D representation of the world from the vehicle’s new viewpoint. It also tracks the motion of the pixels to capture the movement of the cars and people, and other moving objects, in the scene. “This is equivalent to providing the vehicle with an infinite number of possible trajectories,” Rus says. “Because when we collect physical data, we get data from the specific trajectory the car will follow. But we can modify that trajectory to cover all possible ways and environments of driving. That’s really powerful.”
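
The core geometric step described above can be sketched roughly as follows: back-project every pixel into 3D using the estimated depth map, move the camera to the virtual vehicle’s new pose, and project the points back into a new image. This is only an illustrative sketch, not the VISTA implementation; the intrinsics K and the pose T_new are placeholders.

import numpy as np

def reproject_view(image, depth, K, T_new):
    """Synthesize pixel positions as seen from a new camera pose.

    image: (H, W, 3) original frame
    depth: (H, W) estimated depth per pixel (e.g. from a CNN)
    K:     (3, 3) camera intrinsics
    T_new: (4, 4) rigid transform from the original to the new viewpoint
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N

    # Back-project every pixel into a 3D point cloud.
    rays = np.linalg.inv(K) @ pix                       # 3 x N viewing rays
    points = rays * depth.reshape(1, -1)                # scale rays by depth
    points_h = np.vstack([points, np.ones((1, points.shape[1]))])

    # Move the virtual camera, then project the cloud into the new view.
    points_new = (T_new @ points_h)[:3]
    proj = K @ points_new
    uv_new = (proj[:2] / proj[2]).T.reshape(H, W, 2)    # where each pixel lands

    # A real renderer would splat the colors of `image` at uv_new and fill
    # holes; here we just return the warped coordinates.
    return uv_new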

Reinforcement learning from scratch

Traditionally, researchers have been training autonomous vehicles either by following human-defined rules of driving or by trying to imitate human drivers. But the researchers make their controller learn entirely from scratch under an “end-to-end” framework, meaning it takes as input only raw sensor data — such as visual observations of the road — and, from that data, predicts steering commands as outputs.
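
To make “end-to-end” concrete, a steering policy of this kind can be as small as a convolutional network that maps a camera frame directly to a steering command. The sketch below is a generic toy model in PyTorch, not the authors’ actual architecture.

import torch
import torch.nn as nn

class SteeringPolicy(nn.Module):
    """Toy end-to-end policy: raw camera frames in, steering command out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(48, 64), nn.ReLU(), nn.Linear(64, 1), nn.Tanh()
        )

    def forward(self, frame):                    # frame: (batch, 3, H, W)
        return self.head(self.features(frame))   # steering angle in [-1, 1]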

“We basically say, ‘Here’s an environment. You can do whatever you want. Just don’t crash into vehicles, and stay inside the lanes,’” Amini says.

This requires “reinforcement learning” (RL), a trial-and-error machine-learning technique that provides feedback signals whenever the car makes an error. In the researchers’ simulation engine, the controller begins by knowing nothing about how to drive, what a lane marker is, or even what other vehicles look like, so it starts by executing random steering angles. It gets a feedback signal only when it crashes. At that point, it gets teleported to a new simulated location and has to execute a better set of steering angles to avoid crashing again. Over 10 to 15 hours of training, it uses these sparse feedback signals to learn to travel greater and greater distances without crashing.
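
Building on the toy SteeringPolicy above, a sparse-feedback training loop of this kind might look roughly like the following. The env object and its reset()/step() interface are assumptions made for illustration, not the actual VISTA API.

import torch

policy = SteeringPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

for episode in range(1000):
    frame, done = env.reset(), False             # teleport to a new location; frame: (3, H, W) tensor
    log_probs, distance = [], 0.0
    while not done:
        mean = policy(frame.unsqueeze(0)).squeeze()
        dist = torch.distributions.Normal(mean, 0.1)      # exploration noise
        steering = dist.sample()
        log_probs.append(dist.log_prob(steering))
        frame, meters, done = env.step(steering.item())   # done == crashed
        distance += meters
    # Sparse feedback: reward only the distance driven before the crash.
    loss = -distance * torch.stack(log_probs).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()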

After successfully driving 10,000 kilometers in simulation, the authors apply that learned controller onto their full-scale autonomous vehicle in the real world. The researchers say this is the first time a controller trained using end-to-end reinforcement learning in simulation has successfully been deployed onto a full-scale autonomous car. “That was surprising to us. Not only has the controller never been on a real car before, but it’s also never even seen the roads before and has no prior knowledge on how humans drive,” Amini says.

Forcing the controller to run through all types of driving scenarios enabled it to regain control from disorienting positions — such as being half off the road or into another lane — and steer back into the correct lane within several seconds. “And other state-of-the-art controllers all tragically failed at that, because they never saw any data like this in training,” Amini says.

Next, the researchers hope to simulate all types of road conditions from a single driving trajectory, such as night and day, and sunny and rainy weather. They also hope to simulate more complex interactions with other vehicles on the road. “What if other cars start moving and jump in front of the vehicle?” Rus says. “Those are complex, real-world interactions we want to start testing.”

Seen, stored, learned – Self-learning robots solve tasks with the help of an Ensenso 3D camera

At the Institute for Intelligent Process Automation and Robotics of the Karlsruhe Institute of Technology (KIT), the Robot Learning Group (ROLE) focuses on various aspects of machine learning. The scientists are investigating how robots can learn to solve tasks by trying them out independently.

This drone can play dodgeball – and win

By Nicola Nosengo

Drones can do many things, but avoiding obstacles is not their strongest suit yet – especially when they move quickly. Although many flying robots are equipped with cameras that can detect obstacles, it typically takes from 20 to 40 milliseconds for the drone to process the image and react. It may seem quick, but it is not enough to avoid a bird or another drone, or even a static obstacle when the drone itself is flying at high speed. This can be a problem when drones are used in unpredictable environments, or when there are many of them flying in the same area.

Reaction of a few milliseconds
In order to solve this problem, researchers at the University of Zurich have equipped a quadcopter (a drone with four propellers) with special cameras and algorithms that reduced its reaction time down to a few milliseconds – enough to avoid a ball thrown at it from a short distance. The results, published in Science Robotics, can make drones more effective in situations such as the aftermath of a natural disaster. The work was funded by the Swiss National Science Foundation through the National Center of Competence in Research (NCCR) Robotics.

“For search and rescue applications, such as after an earthquake, time is very critical, so we need drones that can navigate as fast as possible in order to accomplish more within their limited battery life,” explains Davide Scaramuzza, who leads the Robotics and Perception Group at the University of Zurich as well as the NCCR Robotics Search and Rescue Grand Challenge. “However, by navigating fast, drones are also more exposed to the risk of colliding with obstacles, and even more so if these are moving. We realised that a novel type of camera, called an event camera, is a perfect fit for this purpose.”

Event cameras have smart pixels
Traditional video cameras, such as the ones found in every smartphone, work by regularly taking snapshots of the whole scene. This is done by exposing all the pixels of the image at the same time. A moving object, however, can only be detected after all the pixels have been analysed by the on-board computer. Event cameras, on the other hand, have smart pixels that work independently of each other. The pixels that detect no changes remain silent, while the ones that see a change in light intensity immediately send out the information. This means that only a tiny fraction of all the pixels in the image needs to be processed by the onboard computer, which greatly speeds up the computation.
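
The “smart pixel” behaviour can be illustrated by simulating it from two intensity snapshots: a pixel emits an event only when its log intensity changes by more than a threshold. This is a simplified model for illustration, not the specification of any particular sensor.

import numpy as np

def frame_to_events(prev_log_I, new_log_I, t, threshold=0.2):
    """Illustrative event-camera pixel model: a pixel fires only when its log
    intensity changes by more than a threshold.

    Returns (x, y, timestamp, polarity) tuples for the pixels that changed.
    """
    diff = new_log_I - prev_log_I
    ys, xs = np.where(np.abs(diff) > threshold)        # only changed pixels
    polarity = np.sign(diff[ys, xs]).astype(int)       # +1 brighter, -1 darker
    return [(int(x), int(y), t, int(p)) for x, y, p in zip(xs, ys, polarity)]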

Event cameras are a recent innovation, and existing object-detection algorithms for drones do not work well with them. So the researchers had to invent their own algorithms, which collect all the events recorded by the camera over a very short time and then subtract the effect of the drone’s own movement – which typically accounts for most of the changes in what the camera sees.
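
A drastically simplified sketch of that idea (not the authors’ exact algorithm) is shown below: events from a few-millisecond window are warped back by the pixel displacement the drone’s own motion would explain, and pixels whose remaining events are dominated by very recent timestamps are flagged as candidate moving objects, since ego-motion alone does not explain them. The ego_flow function and the threshold are assumptions.

import numpy as np

def detect_moving_pixels(events, ego_flow, shape, thresh=0.8):
    """Rough sketch of ego-motion compensation on a window of events.

    events:   list of (x, y, t) from the last few milliseconds
    ego_flow: function (x, y, dt) -> (dx, dy), pixel shift predicted from the
              drone's own motion (e.g. integrated from the IMU)
    shape:    (H, W) of the sensor
    """
    t0 = min(t for _, _, t in events)
    t1 = max(t for _, _, t in events)
    sum_t = np.zeros(shape)
    count = np.zeros(shape)
    for x, y, t in events:
        dx, dy = ego_flow(x, y, t - t0)
        xw, yw = int(round(x - dx)), int(round(y - dy))     # undo ego-motion
        if 0 <= yw < shape[0] and 0 <= xw < shape[1]:
            sum_t[yw, xw] += (t - t0) / (t1 - t0 + 1e-9)    # normalized timestamp
            count[yw, xw] += 1
    mean_t = np.where(count > 0, sum_t / np.maximum(count, 1), 0.0)
    return mean_t > thresh        # boolean mask of candidate moving objects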

Only 3.5 milliseconds to detect incoming objects
Scaramuzza and his team first tested the cameras and algorithms alone. They threw objects of various shapes and sizes towards the camera, and measured how efficient the algorithm was in detecting them. The success rate varied between 81 and 97 per cent, depending on the size of the object and the distance of the throw, and the system only took 3.5 milliseconds to detect incoming objects.

Then the most serious test began: putting the cameras on an actual drone, flying it both indoors and outdoors, and throwing objects directly at it. The drone was able to avoid the objects – including a ball thrown from a 3-meter distance and travelling at 10 meters per second – more than 90 per cent of the time. When the drone “knew” the size of the object in advance, one camera was enough. When, instead, it had to face objects of varying size, two cameras were used to give it stereoscopic vision.

According to Scaramuzza, these results show that event cameras can increase the speed at which drones can navigate by up to ten times, thus expanding their possible applications. “One day drones will be used for a large variety of applications, such as delivery of goods, transportation of people, aerial filmography and, of course, search and rescue,” he says. “But enabling robots to perceive and make decisions faster can also be a game changer for other domains where reliably detecting incoming obstacles plays a crucial role, such as automotive, goods delivery, transportation, mining, and remote inspection with robots.”

Nearly as reliable as human pilots
In the future, the team aims to test this system on an even more agile quadrotor. “Our ultimate goal is to one day make autonomous drones navigate as well as human drone pilots. Currently, in all search and rescue applications where drones are involved, the human is actually in control. If we could have autonomous drones navigate as reliably as human pilots, we would then be able to use them for missions that fall beyond the line of sight or beyond the reach of the remote control,” says Davide Falanga, the PhD student who is the primary author of the article.
