
Garbage-collecting aqua drones and jellyfish filters for cleaner oceans

An aqua drone developed by the WasteShark project can collect litter in harbors before it gets carried out into the open sea. Image credit – WasteShark

By Catherine Collins

Sea litter is estimated to cost the EU up to €630 million per year. It is mostly composed of plastics, which take hundreds of years to break down in nature, and it can affect human health through the food chain, because plastic waste is eaten by the fish that we consume.

‘I’m an accidental environmentalist,’ said Richard Hardiman, who runs a project called WASTESHARK. He says that while walking at his local harbour one day he stopped to watch two men struggle to scoop litter out of the sea using a pool net. Their inefficiency bothered Hardiman, and he set about trying to solve the problem. It was only when he delved deeper into the issue that he realised how damaging marine litter, and plastic in particular, can be, he says.

‘I started exploring where this trash goes – ocean gyres (circular currents), junk gyres, and they’re just full of plastic. I’m very glad that we’re now doing something to lessen the effects,’ he said.

Hardiman developed an unmanned robot, an aqua drone that cruises around urban waters such as harbours, marinas and canals, eating up marine litter like a Roomba of the sea. The waste is collected in a basket which the WasteShark then brings back to shore to be emptied, sorted and recycled.

The design of the autonomous drone is modelled on a whale shark, the ocean’s largest known fish. These giant filter feeders swim around with their mouths open and lazily eat whatever crosses their path.

The drone is powered by rechargeable electric batteries, so it produces no oil spillage, exhaust fumes or carbon emissions, and it runs relatively quietly, avoiding noise pollution. It also moves quite slowly, allowing fish and birds to simply move away when it gets too close for comfort.

‘We’ve tested it in areas of natural beauty and natural parks where we know it doesn’t harm the wildlife,’ said Hardiman. ‘We’re quite fortunate in that, all our research shows that it doesn’t affect the wildlife around.’

WasteShark’s autonomous drone is modelled on a whale shark. Credit – RanMarine Technology

WasteShark is one of a number of new inventions designed to tackle the problem of marine litter. A project called CLAIM is developing five different kinds of technology, one of which is a plasma-based tool called a pyrolyser. 

Useful gas

CLAIM’s pyrolyser will use heat treatment to break down marine litter into a useful gas. Plasma is essentially ionised gas, capable of reaching temperatures of thousands of degrees. Such heat can break the chemical bonds between atoms, converting waste into a type of gas called syngas.

The pyrolyser will be mounted onto a boat collecting floating marine litter – mainly large items of plastic which, if left in the sea, will decay into microplastic – so that the gas can then be used as an eco-friendly fuel to power the boat, or to provide energy for heating in ports.

Dr Nikoleta Bellou of the Hellenic Centre for Marine Research, one of the project coordinators of CLAIM, said: ‘We know that we humans are actually the key drivers for polluting our oceans. Unlike organic material, plastic never disappears in nature and it accumulates in the environment, especially in our oceans. It poses a threat not only to the health of our oceans and to the coasts but to humans, and has social, economic and ecological impacts.’

The researchers chose areas in the Mediterranean and Baltic Seas to act as their case studies throughout the project, and will develop models that can tell scientists which areas are most likely to become litter hotspots. A range of factors influence how littered a beach may be – it’s not only affected by litter louts in the surrounding area but also by circulating winds and currents which can carry litter great distances, dumping the waste on some particular beaches rather than others.

CLAIM’s other methods of tackling plastic pollution include a boom – a series of nets criss-crossing a river that catches the large litter that would otherwise travel to the sea. The nets are then emptied and the waste is collected for treatment with the pyrolyser. Booms have run into problems in the past, when bad weather caused the nets to overload and break, but CLAIM will use automated cameras and other sensors to alert the relevant authorities when the nets are full.

Microplastics

Large plastic pieces that can be scooped out of the water are one thing, but tiny particles known as microplastics that are less than 5mm wide pose a different problem. Scientists on the GoJelly project are using a surprising ingredient to create a filter that prevents microplastics from entering the sea – jellyfish slime.

The filter will be deployed at wastewater treatment plants, a known source of microplastics. The method has already proven successful in the lab, and GoJelly is now planning to scale up the biotechnology for industrial use.

Dr Jamileh Javidpour of the GEOMAR Helmholtz Centre for Ocean Research Kiel, who coordinates the project, said: ‘We have to be innovative to stop microplastics from entering the ocean.’

The GoJelly project kills two birds with one stone – tackling the issue of microplastics while simultaneously addressing the problem of jellyfish blooms, where the creatures reproduce in high enough levels to blanket an area of ocean.

Jellyfish are among the most ancient creatures on the planet, having swum in Earth’s oceans since the time of the dinosaurs. On the whole, thanks to a decline in natural predators and changes in the environment, they are thriving. When they bloom, jellyfish can sting swimmers and disrupt fisheries.

Fishermen often throw caught jellyfish back into the sea as a nuisance but, according to Dr Javidpour, jellyfish can be used much more sustainably. Not only can their slime be used to filter out microplastics, they can also be used as feed for aquaculture, for collagen in anti-ageing products, and even in food.

In fact, part of the GoJelly project involves producing a cookbook, showing people how to make delicious dishes from jellyfish. While Europeans may not be used to cooking with jellyfish, in many Asian cultures they are a daily staple. However, Dr Javidpour stresses that the goal is not to replace normal fisheries.

‘We are mainly ecologists, we know the role of jellyfish as part of a healthy ecosystem,’ she said. ‘We don’t want to switch from classical fishery to jellyfish fishery, but it is part of our task to investigate if it is doable, if it is sustainable.’

The research in this article has been funded by the EU.

Fleet of autonomous boats could service some cities, reducing road traffic

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Senseable City Lab have designed a fleet of autonomous boats that offer high maneuverability and precise control.
Courtesy of the researchers
By Rob Matheson

The future of transportation in waterway-rich cities such as Amsterdam, Bangkok, and Venice — where canals run alongside and under bustling streets and bridges — may include autonomous boats that ferry goods and people, helping clear up road congestion.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Senseable City Lab in the Department of Urban Studies and Planning (DUSP), have taken a step toward that future by designing a fleet of autonomous boats that offer high maneuverability and precise control. The boats can also be rapidly 3-D printed using a low-cost printer, making mass manufacturing more feasible.

The boats could be used to taxi people around and to deliver goods, easing street traffic. In the future, the researchers also envision the driverless boats being adapted to perform city services overnight, instead of during busy daylight hours, further reducing congestion on both roads and canals.

“Imagine shifting some of the infrastructure services that usually take place during the day on the road — deliveries, garbage management, waste management — to the middle of the night, on the water, using a fleet of autonomous boats,” says CSAIL Director Daniela Rus, co-author on a paper describing the technology that’s being presented at this week’s IEEE International Conference on Robotics and Automation.

Moreover, the boats — rectangular 4-by-2-meter hulls equipped with sensors, microcontrollers, GPS modules, and other hardware — could be programmed to self-assemble into floating bridges, concert stages, platforms for food markets, and other structures in a matter of hours. “Again, some of the activities that are usually taking place on land, and that cause disturbance in how the city moves, can be done on a temporary basis on the water,” says Rus, who is the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science.

The boats could also be equipped with environmental sensors to monitor a city’s waters and gain insight into urban and human health.

Co-authors on the paper are: first author Wei Wang, a joint postdoc in CSAIL and the Senseable City Lab; Luis A. Mateos and Shinkyu Park, both DUSP postdocs; Pietro Leoni, a research fellow, and Fábio Duarte, a research scientist, both in DUSP and the Senseable City Lab; Banti Gheneti, a graduate student in the Department of Electrical Engineering and Computer Science; and Carlo Ratti, a principal investigator and professor of the practice in the DUSP and director of the MIT Senseable City Lab.

Better design and control

The work was conducted as part of the “Roboat” project, a collaboration between the MIT Senseable City Lab and the Amsterdam Institute for Advanced Metropolitan Solutions (AMS). In 2016, as part of the project, the researchers tested a prototype that cruised around the city’s canals, moving forward, backward, and laterally along a preprogrammed path.

The ICRA paper details several important new innovations: a rapid fabrication technique, a more efficient and agile design, and advanced trajectory-tracking algorithms that improve control, precision docking and latching, and other tasks. 

To make the boats, the researchers 3-D-printed a rectangular hull with a commercial printer, producing 16 separate sections that were spliced together. Printing took around 60 hours. The completed hull was then sealed by adhering several layers of fiberglass.

Integrated onto the hull are a power supply, Wi-Fi antenna, GPS, and a minicomputer and microcontroller. For precise positioning, the researchers incorporated an indoor ultrasound beacon system and outdoor real-time kinematic GPS modules, which allow for centimeter-level localization, as well as an inertial measurement unit (IMU) module that monitors the boat’s yaw and angular velocity, among other metrics.
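
As a rough illustration of how position fixes and inertial data of this kind are commonly combined, a complementary filter can blend slow, centimeter-level position fixes with high-rate gyro integration. The sketch below is a generic pattern under assumed rates and gains, not the Roboat firmware, and the absolute heading reference it uses is a hypothetical addition.

```python
# Minimal sketch: blending slow, centimeter-level position fixes (RTK GPS
# or ultrasound beacons) with fast IMU yaw-rate integration using a
# complementary filter. Generic pattern only -- not the Roboat firmware;
# the gain, update rates, and heading reference are assumptions.
import numpy as np

class PoseEstimator:
    def __init__(self, alpha=0.98):
        self.alpha = alpha      # trust placed in the integrated gyro yaw
        self.pos = np.zeros(2)  # x, y in meters
        self.yaw = 0.0          # radians (angle wrap-around ignored here)

    def update_imu(self, yaw_rate, dt):
        """High-rate step: integrate the gyro between position fixes."""
        self.yaw += yaw_rate * dt

    def update_fix(self, pos_fix, yaw_ref):
        """Low-rate step: snap position to the centimeter-level fix and
        nudge the heading toward an absolute reference (a hypothetical
        magnetometer or dual-antenna GPS) to correct gyro drift."""
        self.pos = np.asarray(pos_fix, dtype=float)
        self.yaw = self.alpha * self.yaw + (1 - self.alpha) * yaw_ref

est = PoseEstimator()
est.update_imu(yaw_rate=0.1, dt=0.01)   # e.g. a 100 Hz IMU (assumed rate)
est.update_fix([1.02, 0.48], 0.002)     # e.g. a 10 Hz RTK fix (assumed rate)
print(est.pos, est.yaw)
```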

The boat has a rectangular shape, instead of the traditional kayak or catamaran shapes, which allows the vessel to move sideways and to attach itself to other boats when assembling larger structures. Another simple yet effective design element was thruster placement. The four thrusters are positioned at the center of each side, instead of at the four corners, generating forward and backward forces. This makes the boat more agile and efficient, the researchers say.

The team also developed a method that enables the boat to track its position and orientation more quickly and accurately. To do so, they developed an efficient version of a nonlinear model predictive control (NMPC) algorithm, generally used to control and navigate robots within various constraints.

The NMPC and similar algorithms have been used to control autonomous boats before, but typically those algorithms are tested only in simulation or don’t account for the dynamics of the boat. The researchers instead incorporated into the algorithm simplified nonlinear mathematical models that account for a few known parameters, such as the boat’s drag, centrifugal and Coriolis forces, and added mass due to accelerating or decelerating in water. They also used an identification algorithm that estimates any remaining unknown parameters as the boat is trained on a path.

Finally, the researchers used an efficient predictive-control platform to run their algorithm, which can rapidly determine upcoming actions and increases the algorithm’s speed by two orders of magnitude over similar systems. While other algorithms execute in about 100 milliseconds, the researchers’ algorithm takes less than 1 millisecond.
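
To make the structure of such a controller concrete, here is a minimal Python sketch of receding-horizon trajectory tracking with a simplified planar boat model. The dynamics, parameter values, horizon, and cost weights are illustrative assumptions rather than the Roboat code, and a generic off-the-shelf solver like the one used here is far slower than the specialized 1-millisecond platform described above.

```python
# Minimal sketch of receding-horizon (NMPC-style) trajectory tracking for a
# planar boat. All model parameters, weights, and the horizon are assumed,
# illustrative values -- this is not the Roboat implementation.
import numpy as np
from scipy.optimize import minimize

DT, HORIZON = 0.2, 10              # 0.2 s update interval, 2 s lookahead
MASS, INERTIA = 15.0, 1.5          # kg, kg*m^2 (hypothetical)
DRAG = np.array([6.0, 8.0, 1.0])   # surge/sway/yaw linear drag (hypothetical)

def step(state, ctrl):
    """One Euler step of simplified dynamics: linear drag plus rigid-body
    coupling, a stand-in for the Coriolis/added-mass terms in the paper."""
    x, y, psi, u, v, r = state
    fx, fy, mz = ctrl
    du = (fx - DRAG[0] * u) / MASS + v * r
    dv = (fy - DRAG[1] * v) / MASS - u * r
    dr = (mz - DRAG[2] * r) / INERTIA
    dx = u * np.cos(psi) - v * np.sin(psi)
    dy = u * np.sin(psi) + v * np.cos(psi)
    return state + DT * np.array([dx, dy, r, du, dv, dr])

def cost(flat_ctrls, state, reference):
    """Tracking error plus a small control-effort penalty over the horizon."""
    ctrls = flat_ctrls.reshape(HORIZON, 3)
    total = 0.0
    for k in range(HORIZON):
        state = step(state, ctrls[k])
        err = state[:3] - reference[k]           # x, y, yaw error
        total += err @ err + 1e-3 * (ctrls[k] @ ctrls[k])
    return total

def nmpc_control(state, reference, max_thrust=10.0):
    """Solve the finite-horizon problem, then apply only the first control
    (the receding-horizon principle); re-solve at the next 0.2 s update."""
    x0 = np.zeros(HORIZON * 3)
    bounds = [(-max_thrust, max_thrust)] * (HORIZON * 3)
    sol = minimize(cost, x0, args=(state, reference),
                   bounds=bounds, method="SLSQP")
    return sol.x[:3]

# Example: track a straight line along the x-axis at a fixed heading.
ref = np.array([[0.5 * (k + 1), 0.0, 0.0] for k in range(HORIZON)])
print(nmpc_control(np.zeros(6), ref))
```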

Testing the waters

To demonstrate the control algorithm’s efficacy, the researchers deployed a smaller prototype of the boat along preplanned paths in a swimming pool and in the Charles River. Over the course of 10 test runs, the researchers observed average tracking errors — in positioning and orientation — smaller than tracking errors of traditional control algorithms.

That accuracy is thanks, in part, to the boat’s onboard GPS and IMU modules, which determine position and direction, respectively, down to the centimeter. The NMPC algorithm crunches the data from those modules and weighs various metrics to steer the boat true. The algorithm is implemented in a controller computer and regulates each thruster individually, updating every 0.2 seconds.

“The controller considers the boat dynamics, current state of the boat, thrust constraints, and reference position for the coming several seconds, to optimize how the boat drives on the path,” Wang says. “We can then find optimal force for the thrusters that can take the boat back to the path and minimize errors.”

The innovations in design and fabrication, as well as faster and more precise control algorithms, point toward feasible driverless boats used for transportation, docking, and self-assembling into platforms, the researchers say.

A next step for the work is developing adaptive controllers to account for changes in mass and drag of the boat when transporting people and goods. The researchers are also refining the controller to account for wave disturbances and stronger currents.

“We actually found that the Charles River has much more current than in the canals in Amsterdam,” Wang says. “But there will be a lot of boats moving around, and big boats will bring big currents, so we still have to consider this.”

The work was supported by a grant from AMS.

Making driverless cars change lanes more like human drivers do

At the International Conference on Robotics and Automation tomorrow, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new lane-change algorithm.
By Larry Hardesty

In the field of self-driving cars, algorithms for controlling lane changes are an important topic of study. But most existing lane-change algorithms have one of two drawbacks: Either they rely on detailed statistical models of the driving environment, which are difficult to assemble and too complex to analyze on the fly; or they’re so simple that they can lead to impractically conservative decisions, such as never changing lanes at all.

At the International Conference on Robotics and Automation tomorrow, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new lane-change algorithm that splits the difference. It allows for more aggressive lane changes than the simple models do but relies only on immediate information about other vehicles’ directions and velocities to make decisions.

“The motivation is, ‘What can we do with as little information as possible?’” says Alyssa Pierson, a postdoc at CSAIL and first author on the new paper. “How can we have an autonomous vehicle behave as a human driver might behave? What is the minimum amount of information the car needs to elicit that human-like behavior?”

Pierson is joined on the paper by Daniela Rus, the Viterbi Professor of Electrical Engineering and Computer Science; Sertac Karaman, associate professor of aeronautics and astronautics; and Wilko Schwarting, a graduate student in electrical engineering and computer science.

“The optimization solution will ensure navigation with lane changes that can model an entire range of driving styles, from conservative to aggressive, with safety guarantees,” says Rus, who is the director of CSAIL.

One standard way for autonomous vehicles to avoid collisions is to calculate buffer zones around the other vehicles in the environment. The buffer zones describe not only the vehicles’ current positions but their likely future positions within some time frame. Planning lane changes then becomes a matter of simply staying out of other vehicles’ buffer zones.

For any given method of computing buffer zones, algorithm designers must prove that it guarantees collision avoidance, within the context of the mathematical model used to describe traffic patterns. That proof can be complex, so the optimal buffer zones are usually computed in advance. During operation, the autonomous vehicle then calls up the precomputed buffer zones that correspond to its situation.

The problem is that if traffic is fast enough and dense enough, precomputed buffer zones may be too restrictive. An autonomous vehicle will fail to change lanes at all, whereas a human driver would cheerfully zip around the roadway.

With the MIT researchers’ system, if the default buffer zones are leading to performance that’s far worse than a human driver’s, the system will compute new buffer zones on the fly — complete with proof of collision avoidance.

That approach depends on a mathematically efficient method of describing buffer zones, so that the collision-avoidance proof can be executed quickly. And that’s what the MIT researchers developed.

They begin with a so-called Gaussian distribution — the familiar bell-curve probability distribution. That distribution represents the current position of the car, factoring in both its length and the uncertainty of its location estimation.

Then, based on estimates of the car’s direction and velocity, the researchers’ system constructs a so-called logistic function. Multiplying the logistic function by the Gaussian distribution skews the distribution in the direction of the car’s movement, with higher speeds increasing the skew.

The skewed distribution defines the vehicle’s new buffer zone. But its mathematical description is so simple — using only a few equation variables — that the system can evaluate it on the fly.
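
As a rough illustration of this construction, the sketch below multiplies a Gaussian centered on the car by a logistic function aligned with its velocity, and takes the buffer zone to be the region where the resulting density exceeds a threshold. The covariance, skew scaling, and threshold are assumed values for illustration, not the paper’s actual parameters.

```python
# Minimal sketch of the skewed buffer-zone idea: a Gaussian over the car's
# position multiplied by a logistic function aligned with its velocity, so
# the zone stretches ahead of the car and grows with speed. The covariance,
# skew scaling, and threshold are illustrative assumptions.
import numpy as np

def buffer_density(pt, pos, vel, cov):
    """Unnormalized skewed density at point `pt` for a car at `pos` moving
    with velocity `vel`; `cov` encodes car length and position uncertainty."""
    diff = pt - pos
    gauss = np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff)
    # Logistic skew: faster motion gives a stronger skew along the heading.
    logistic = 1.0 / (1.0 + np.exp(-vel @ diff))
    return gauss * logistic

def in_buffer_zone(pt, pos, vel, cov, threshold=0.05):
    """The buffer zone is the region where the skewed density is high."""
    return buffer_density(pt, pos, vel, cov) > threshold

# Example: a car at the origin heading +x at 10 m/s. A point 2 m ahead falls
# inside the buffer zone; the mirror point 2 m behind does not.
cov = np.diag([2.0, 0.5])   # elongated along the car's axis (assumed)
car, vel = np.zeros(2), np.array([10.0, 0.0])
print(in_buffer_zone(np.array([2.0, 0.0]), car, vel, cov))    # True
print(in_buffer_zone(np.array([-2.0, 0.0]), car, vel, cov))   # False
```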

The researchers tested their algorithm in a simulation including up to 16 autonomous cars driving in an environment with several hundred other vehicles.

“The autonomous vehicles were not in direct communication but ran the proposed algorithm in parallel without conflict or collisions,” explains Pierson. “Each car used a different risk threshold that produced a different driving style, allowing us to create conservative and aggressive drivers. Using the static, precomputed buffer zones would only allow for conservative driving, whereas our dynamic algorithm allows for a broader range of driving styles.”

This project was supported, in part, by the Toyota Research Institute and the Office of Naval Research.

Videos from European Robotics Forum 2018


The European Robotics Forum 2018 (ERF2018), the most influential meeting of the robotics community in Europe, took place in Tampere on 13-15 March 2018. ERF2018 brought together over 900 leading scientists, companies, and policymakers.

Under the theme “Robots and Us”, more than 50 workshops covered current societal and technical themes, including human-robot collaboration and how robotics can improve industrial productivity and service sector operations.

Click on the list below to watch: Opening Ceremony (13 March), euRobotics Awards Ceremony (14 March), Opening reception (13 March), and the following workshops:

The new H2020 robotics projects in the SPARC strategy
EU Projects offering services
Innovation in H2020 projects – EC Innovation Radar Prize 2017
Success Stories – Step Change Results from FP7 Projects
Drafting a Robot Manifesto
Innovation with Robotics in Regional Clusters

Credits: Visual Outcasts, Tampere Talo, Olli Perttula

British robot-using online grocer licenses its technology to US Kroger chain

Ocado, the UK leader in home-delivered groceries from robot-run distribution centers, has established a licensing deal with US grocery chain Kroger (NYSE:KR): Kroger will take a 5% stake in Ocado – an investment valued at ~$247.5 million – and Ocado will help Kroger set up systems to manage online ordering, fulfillment and delivery operations using Ocado-proven technologies. The deal will see Kroger build up to 20 Ocado-designed robot-run warehouses over its first three years.

In a recent letter to Kroger stockholders, CEO Rodney McMullen said that Kroger is redeploying capital to improve its digital capabilities and let customers shop in the store, order online and pick up at the store, or have their groceries delivered to their homes. Although McMullen didn’t single out Amazon or any other competitors in the supermarket arena, Amazon’s acquisition of Whole Foods, together with Walmart’s price-cutting moves and its partnership with online grocery delivery service Instacart, is rapidly changing the landscape of grocery shopping.

“Kroger is right in the middle of such a reinvention,” McMullen said in the shareholder letter. “We are proactively addressing customer changes and we’re making strategic investments to create the future of retail: a seamless digital experience, customer-centric technology solutions, an enhanced associate experience, space-optimized stores and smart-priced products.”

Ocado has begun to commercialize its technologies and signed its first major deal outside the U.K. with Casino, the operator of French supermarket chain Monoprix. Then came Canada’s Sobeys in January, and this month it was the turn of Sweden’s ICA. The Kroger deal is the biggest yet, and Ocado’s share price is at the time of writing up 56% on the news.

Ocado invested $57.5 million in technology in 2017, up from $46 million the previous year. The company is developing and deploying proprietary technology, has a tech staff of 1,100, and uses about 500 robots interacting with each other on a stacked grid, which has allowed it to process more than 20,000 daily orders.

Earlier this year (in February) Ocado raised ~$192.5 million by selling shares. Back then the stock was priced at £487. Today it closed at £861, an increase of 76%!

MiR200 – Autonomous Mobile Cobot

MiR200 is a safe, cost-effective mobile robot that automates your internal transportation. The robot optimizes workflows, freeing staff resources so you can increase productivity and reduce costs. MiR200 safely maneuvers around people and obstacles, through doorways and in and out of lifts. You can download CAD files of the building directly to the robot, or program it with the simple, web-based interface that requires no prior programming experience. With its fast implementation, the robot offers a fast ROI, with payback in as little as a year.

New robots set to transform farming

European consumers expect a clean supply chain and the conservation of biodiversity. This requires reducing inputs of pesticides and chemical fertilisers to a minimum and/or replacing them with agro-ecological or robotic solutions. Furthermore, the average age of European farmers is among the highest of all sectors, so farming needs to offer young people attractive working opportunities.

RIA to Present Gudrun Litzenberger and Esben Østergaard with Engelberger Robotics Awards

The awards recognize outstanding individuals from all over the world. Since the award’s inception in 1977, it has been bestowed upon 126 robotics leaders from 17 different nations. This year the awards will be presented in the categories of leadership and technology.

Robots in Depth with Paul Ekas

In this episode of Robots in Depth, Per Sjöborg speaks with Paul Ekas about his EZGripper and how he designed it to be low cost, lightweight, robust and to offer reliable gripping of small and large objects.

The EZGripper is a tendon-based gripper that uses Dyneema tendons and aluminum oxide eyelets to make it durable and able to handle rough environments.

The EZGripper is under-actuated: its fingers stay straight when picking up small objects and wrap around large ones. Both position and torque can be controlled, allowing the gripper to hold soft or hard objects, gently or firmly.

We want to end the de-industrialisation of Europe – Prof. Jürgen Rüttgers

Artificial intelligence software technologies are very important for European industry, says Prof. Jürgen Rüttgers. Image credit – Dirk Vorderstraße, licensed under CC BY 2.0
By Rex Merrifield

Prof. Rüttgers leads the High Level Group on Industrial Technologies, which on 24 April released a report called Re-finding industry – Defining Innovation, making recommendations on EU research and innovation priorities for industry in the next funding programme.

Your report says that AI should be designated as a key enabling technology in the next funding programme, which means it is classed as a priority policy area. What does this mean in practice?

‘Artificial intelligence is something relatively new and a field of strong competition, not only in the world, but also in Europe. It is therefore essential to find a common way ahead to encourage our research. Europe has long experience with identifying key enabling technologies and through these we can organise research and innovation, so we find the best answers from new technologies to benefit our industries.

‘That was why we are defining not only a new innovation policy, but also a policy to re-find industry. We want to end the de-industrialisation of Europe of recent years and by setting these technologies as priorities, we can find a strong way ahead for industry.

‘If we do it well, and I am sure that is possible, we also have the possibility in Europe to recover jobs we have been losing abroad. That is an idea that is very closely tied with the idea of strong productivity growth. But we are aiming not only for economic growth. Among our recommendations is that the European Union and Member States aim for inclusive growth and sustainable protection of our planet. That is essential to our idea for the future.’


What other technologies are crucial to Europe’s industry?

‘Industry is central to Europe’s economy. It contributes to Europeans’ prosperity and provides jobs to 36 million people in Europe – one in five jobs. So the new key enabling technologies for the 21st century need to put us in front in the global competition between Europe, the United States and China. We look at these technologies in three areas.

‘It is very important for the EU to prioritise production technologies. These include advanced manufacturing technologies, advanced materials and nanotechnologies, and life science.

‘And in the 21st century it is essential to encourage digital technologies – micro- and nano-electronics and photonics.

‘Thirdly, we also need to advance cyber technologies and central to this is artificial intelligence, along with cyber security and connectivity.

‘If you look at data generation and handling, big data analytics, machine learning and deep learning, robots and virtual agents, you can see that artificial intelligence software technologies are very important for European industry in this century.’

What impact do you hope that your report will have on research output, industry and for the wider public?

‘Central to the great challenges in the 21st century is the transition from the industrial society to the knowledge society. In this society, we have a new production factor – knowledge, which is a most important resource.

‘We also face globalisation, through communication without borders. In the future, no economy can be organised only in a national way.

‘The knowledge society will be accelerated by the widespread digitisation of the economy and society, leading to this process being called the ‘digital revolution’. It is comparable in scope and impact to the industrial revolution of some 200 years ago.

‘Many people fear that this change is not good for them, so it is necessary to have open discussion in all European Member States about this great change. If we manage it well, we will have more jobs, more productivity growth and will be at the front of international competition. But we will only have a chance to achieve this future if we have a strong research ecosystem.’

What else needs to happen to encourage innovation?

‘The European policy for the future is research and innovation policy. It is currently under-financed and for a new system of innovation to grow strongly, we must also have a bigger budget. I believe a most important decision for the European Council in coming months is the debate over the budget for 2021-2027.

‘And we also know what is necessary to do over the next decade, together with all citizens in Europe, to achieve success. I believe we also need a clear message to the people of Europe, so that they will agree.’

Your proposals emphasise sustainability, job creation, fighting inequalities and support to democracy. Could you explain the importance of these elements in research and innovation?

‘In research and innovation policy there is not only an argument for jobs, but also the argument that it is good to live in a united Europe with these values.

‘Young people must see they have a chance to work, have good education, to have good vocational or academic jobs. And we need to make sure there is a chance for more new jobs in new start-ups, in new small and medium enterprises, in new industries and in a new innovation system. If we do that, we will see great acceptance among European Union citizens. They will accept new things, not fear them. The reason is that they know we have an inclusive society and not only a very successful economy.’


Researchers develop virtual-reality testing ground for drones

MIT engineers have developed a new virtual-reality training system for drones that enables a vehicle to “see” a rich, virtual environment while flying in an empty physical space.
Image: William Litant
By Jennifer Chu

Training drones to fly fast, around even the simplest obstacles, is a crash-prone exercise that can have engineers repairing or replacing vehicles with frustrating regularity.

Now MIT engineers have developed a new virtual-reality training system for drones that enables a vehicle to “see” a rich, virtual environment while flying in an empty physical space.

The system, which the team has dubbed “Flight Goggles,” could significantly reduce the number of crashes that drones experience in actual training sessions. It can also serve as a virtual testbed for any number of environments and conditions in which researchers might want to train fast-flying drones.

“We think this is a game-changer in the development of drone technology, for drones that go fast,” says Sertac Karaman, associate professor of aeronautics and astronautics at MIT. “If anything, the system can make autonomous vehicles more responsive, faster, and more efficient.”

Karaman and his colleagues will present details of their virtual training system at the IEEE International Conference on Robotics and Automation next week. Co-authors include Thomas Sayre-McCord, Winter Guerra, Amado Antonini, Jasper Arneberg, Austin Brown, Guilherme Cavalheiro, Dave McCoy, Sebastian Quilter, Fabian Riether, Ezra Tal, Yunus Terzioglu, and Luca Carlone of MIT’s Laboratory for Information and Decision Systems, along with Yajun Fang of MIT’s Computer Science and Artificial Intelligence Laboratory, and Alex Gorodetsky of Sandia National Laboratories.

Pushing boundaries

Karaman was initially motivated by a new, extreme robo-sport: competitive drone racing, in which remote-controlled drones, driven by human players, attempt to out-fly each other through an intricate maze of windows, doors, and other obstacles. Karaman wondered: Could an autonomous drone be trained to fly just as fast, if not faster, than these human-controlled vehicles, with even better precision and control?

“In the next two or three years, we want to enter a drone racing competition with an autonomous drone, and beat the best human player,” Karaman says. To do so, the team would have to develop an entirely new training regimen.

Currently, training autonomous drones is a physical task: Researchers fly drones in large, enclosed testing grounds, in which they often hang large nets to catch any careening vehicles. They also set up props, such as windows and doors, through which a drone can learn to fly. When vehicles crash, they must be repaired or replaced, which delays development and adds to a project’s cost.

Karaman says testing drones in this way can work for vehicles that are not meant to fly fast, such as drones that are programmed to slowly map their surroundings. But for fast-flying vehicles that need to process visual information quickly as they fly through an environment, a new training system is necessary.

“The moment you want to do high-throughput computing and go fast, even the slightest changes you make to its environment will cause the drone to crash,” Karaman says. “You can’t learn in that environment. If you want to push boundaries on how fast you can go and compute, you need some sort of virtual-reality environment.”

Flight Goggles

The team’s new virtual training system comprises a motion capture system, an image rendering program, and electronics that enable the team to quickly process images and transmit them to the drone.

The actual test space — a hangar-like gymnasium in MIT’s new drone-testing facility in Building 31 — is lined with motion-capture cameras that track the orientation of the drone as it’s flying.

With the image-rendering system, Karaman and his colleagues can draw up photorealistic scenes, such as a loft apartment or a living room, and beam these virtual images to the drone as it’s flying through the empty facility.    

“The drone will be flying in an empty room, but will be ‘hallucinating’ a completely different environment, and will learn in that environment,” Karaman explains.

The virtual images can be processed by the drone at a rate of about 90 frames per second — around three times as fast as the human eye can see and process images. To enable this, the team custom-built circuit boards that integrate a powerful embedded supercomputer, along with an inertial measurement unit and a camera. They fit all this hardware into a small, 3-D-printed nylon and carbon-fiber-reinforced drone frame. 
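
The overall pipeline can be pictured as a simple hardware-in-the-loop rendering loop: read the drone’s real pose from motion capture, render the virtual scene from that pose, and stream the frame back to the vehicle on a fixed budget. The sketch below is purely illustrative; the interfaces are placeholder stubs, not the actual Flight Goggles software.

```python
# Illustrative hardware-in-the-loop rendering loop: real pose in, virtual
# frame out, at a fixed frame budget. The interfaces are placeholder stubs,
# not the actual Flight Goggles software; the rate matches the ~90 fps
# figure quoted above.
import time

TARGET_FPS = 90
FRAME_DT = 1.0 / TARGET_FPS

def vr_training_loop(mocap, renderer, drone_link, duration_s=10.0):
    deadline = time.time() + duration_s
    while time.time() < deadline:
        t0 = time.time()
        pose = mocap.get_pose()        # real position/orientation of the drone
        frame = renderer.render(pose)  # virtual scene viewed from that pose
        drone_link.send(frame)         # the drone "sees" the virtual room
        # Sleep off whatever remains of the frame budget.
        time.sleep(max(0.0, FRAME_DT - (time.time() - t0)))

# Tiny stand-in objects so the sketch runs end to end.
class Stub:
    def get_pose(self):
        return (0.0, 0.0, 1.0, 0.0)    # x, y, z, yaw (hypothetical format)
    def render(self, pose):
        return b"frame-bytes"
    def send(self, frame):
        pass

vr_training_loop(Stub(), Stub(), Stub(), duration_s=0.1)
```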

A crash course

The researchers carried out a set of experiments, including one in which the drone learned to fly through a virtual window about twice its size. The window was set within a virtual living room. As the drone flew in the actual, empty testing facility, the researchers beamed images of the living room scene, from the drone’s perspective, back to the vehicle. As the drone flew through this virtual room, the researchers tuned a navigation algorithm, enabling the drone to learn on the fly.

Over 10 flights, the drone, flying at around 2.3 meters per second (5 miles per hour), successfully flew through the virtual window 361 times, only “crashing” into the window three times, according to positioning information provided by the facility’s motion-capture cameras. Karaman points out that, even if the drone crashed thousands of times, it wouldn’t make much of an impact on the cost or time of development, as it’s crashing in a virtual environment and not making any physical contact with the real world.

In a final test, the team set up an actual window in the test facility, and turned on the drone’s onboard camera to enable it to see and process its actual surroundings. Using the navigation algorithm that the researchers tuned in the virtual system, the drone, over eight flights, was able to fly through the real window 119 times, only crashing or requiring human intervention six times.

“It does the same thing in reality,” Karaman says. “It’s something we programmed it to do in the virtual environment, by making mistakes, falling apart, and learning. But we didn’t break any actual windows in this process.”

He says the virtual training system is highly malleable. For instance, researchers can pipe in their own scenes or layouts in which to train drones, including detailed, drone-mapped replicas of actual buildings — something the team is considering doing with MIT’s Stata Center. The training system may also be used to test out new sensors, or specifications for existing sensors, to see how they might handle on a fast-flying drone.

“We could try different specs in this virtual environment and say, ‘If you build a sensor with these specs, how would it help a drone in this environment?’” Karaman says.

The system can also be used to train drones to fly safely around humans. For instance, Karaman envisions splitting the actual test facility in two, with a drone flying in one half, while a human, wearing a motion-capture suit, walks in the other half. The drone would “see” the human in virtual reality as it flies around its own space. If it crashes into the person, the result is virtual, and harmless.

“One day, when you’re really confident, you can do it in reality, and have a drone flying around a person as they’re running, in a safe way,” Karaman says. “There are a lot of mind-bending experiments you can do in this whole virtual reality thing. Over time, we will showcase all the things you can do.”

This research was supported, in part, by the U.S. Office of Naval Research, MIT Lincoln Laboratory, and the NVIDIA Corporation.
