Archive 26.04.2021


An AR interface to assist human agents during critical missions

In recent years, computer scientists and roboticists have developed a variety of technological tools to aid human agents during critical missions, such as military operations or search and rescue efforts. Unmanned aerial vehicles (UAVs) have proved to be particularly valuable in these cases, as they can often enter remote or dangerous areas that are inaccessible to humans.

As the use of robotics and automation increases, so too does the need to protect these valuable assets.

The risk to humans in warehouses is well known, with the UK logistics sector reporting around 28,000 non-fatal accidents at work annually. But businesses are now realising that their highly sophisticated and expensive assets also face safety risks and require protection.

Navigating beneath the Arctic ice

For scientists to understand the role the changing environment in the Arctic Ocean plays in global climate change, there is a need to map the ocean below the ice cover. Image credits: Troy Barnhart, Chief Petty Officer, U.S. Navy

By Mary Beth Gallagher | Department of Mechanical Engineering

There is a lot of activity beneath the vast, lonely expanses of ice and snow in the Arctic. Climate change has dramatically altered the layer of ice that covers much of the Arctic Ocean. Areas of water that used to be covered by a solid ice pack are now covered by thin layers only 3 feet deep. Beneath the ice, a warm layer of water, part of the Beaufort Lens, has changed the makeup of the aquatic environment.    

For scientists to understand the role this changing environment in the Arctic Ocean plays in global climate change, there is a need to map the ocean below the ice cover.

A team of MIT engineers and naval officers led by Henrik Schmidt, professor of mechanical and ocean engineering, is trying to understand environmental changes, their impact on acoustic transmission beneath the surface, and how these changes affect navigation and communication for vehicles traveling below the ice.

“Basically, what we want to understand is how does this new Arctic environment brought about by global climate change affect the use of underwater sound for communication, navigation, and sensing?” explains Schmidt.

To answer this question, Schmidt traveled to the Arctic with members of the Laboratory for Autonomous Marine Sensing Systems (LAMSS) including Daniel Goodwin and Bradli Howard, graduate students in the MIT-Woods Hole Oceanographic Institution Joint Program in oceanographic engineering.

With funding from the Office of Naval Research, the team participated in ICEX — or Ice Exercise — 2020, a three-week program hosted by the U.S. Navy, where military personnel, scientists, and engineers work side-by-side executing a variety of research projects and missions.

A strategic waterway

The rapidly changing environment in the Arctic has wide-ranging impacts. In addition to giving researchers more information about the impact of global warming and the effects it has on marine mammals, the thinning ice could potentially open up new shipping lanes and trade routes in areas that were previously untraversable.

Perhaps most crucially for the U.S. Navy, understanding the altered environment also has geopolitical importance.

“If the Arctic environment is changing and we don’t understand it, that could have implications in terms of national security,” says Goodwin.

Several years ago, Schmidt and his colleague Arthur Baggeroer, professor of mechanical and ocean engineering, were among the first to recognize that the warmer waters, part of the Beaufort Lens, coupled with the changing ice composition, impacted how sound traveled in the water.

To successfully navigate throughout the Arctic, the U.S. Navy and other entities in the region need to understand how these changes in sound propagation affect a vehicle’s ability to communicate and navigate through the water.

Using an unpiloted, autonomous underwater vehicle (AUV) built by General Dynamics-Mission Systems (GD-MS), and a system of sensors rigged on buoys developed by the Woods Hole Oceanographic Institution, Schmidt and his team, joined by Dan McDonald and Josiah DeLange of GD-MS, set out to demonstrate a new integrated acoustic communication and navigation concept.

The research team prepares to deploy an autonomous underwater vehicle built by General Dynamics Mission Systems to test their navigational concept. Image credits: Daniel Goodwin, LCDR, USN

The framework, which was also supported and developed by LAMSS members Supun Randeni, EeShan Bhatt, Rui Chen, and Oscar Viquez, as well as LAMSS alumnus Toby Schneider of GobySoft LLC, would allow vehicles to travel through the water with GPS-level accuracy while employing oceanographic sensors for data collection.

“In order to prove that you can use this navigational concept in the Arctic, we have to first ensure we fully understand the environment that we’re operating in,” adds Goodwin.

Understanding the environment below

After arriving at the Arctic Submarine Lab’s ice camp last spring, the research team deployed a number of conductivity-temperature-depth probes to gather data about the aquatic environment in the Arctic.

“By using temperature and salinity as a function of depth, we calculate the sound speed profile. This helps us understand if the AUV’s location is good for communication or bad,” says Howard, who was responsible for monitoring environmental changes to the water column throughout ICEX.
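The calculation Howard describes can be made concrete with a standard empirical relation. The sketch below uses Mackenzie's (1981) nine-term equation for sound speed as a function of temperature, salinity, and depth; the probe readings are invented for illustration and are not the team's data, and sub-zero polar temperatures sit at the edge of the formula's nominal validity range.

```python
# Illustrative sketch: computing a sound speed profile from CTD-style data.
# Uses Mackenzie's (1981) empirical equation. The sample readings below are
# invented for illustration and are NOT ICEX measurements; note the equation's
# nominal validity range does not fully cover sub-zero polar temperatures.

def sound_speed(T, S, D):
    """Sound speed (m/s) from temperature T (deg C), salinity S (ppt), depth D (m)."""
    return (1448.96 + 4.591 * T - 5.304e-2 * T**2 + 2.374e-4 * T**3
            + 1.340 * (S - 35) + 1.630e-2 * D + 1.675e-7 * D**2
            - 1.025e-2 * T * (S - 35) - 7.139e-13 * T * D**3)

# Hypothetical probe readings: (depth m, temperature deg C, salinity ppt)
profile = [(10, -1.6, 28.0), (50, -1.4, 31.0), (100, 0.5, 32.5), (200, -0.2, 34.0)]

for depth, temp, sal in profile:
    print(f"{depth:4d} m: {sound_speed(temp, sal, depth):7.1f} m/s")
```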

A team including professor Henrik Schmidt, MIT-WHOI Joint Program graduate students Daniel Goodwin and Bradli Howard, members of the Laboratory for Autonomous Marine Sensing Systems, and the Arctic Submarine Lab, traveled to the Arctic in March 2020 as part of ICEX 2020, a three-week program hosted by the U.S. Navy, where military personnel, scientists, and engineers work side-by-side executing a variety of research projects and missions. Image credits: Mike Demello, Arctic Submarine Laboratory

Because of the way sound bends in water, a phenomenon described by Snell's law, sine-like pressure waves collect in some parts of the water column and disperse in others. Understanding the propagation trajectories is key to predicting good and bad locations for the AUV to operate.

To map the areas of the water with optimal acoustic properties, Howard modified the traditional signal-to-noise ratio (SNR) with a metric known as the multi-path penalty (MPP), which penalizes areas where the AUV receives echoes of its messages. As a result, the vehicle prioritizes operating in areas with less reverberation.
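Both ideas can be sketched in a few lines of code. The refraction step below is just Snell's law applied between two layers with different sound speeds; the scoring function is a deliberately simplified stand-in that subtracts an echo penalty from the SNR, and its form and weighting are assumptions for illustration, not the team's published MPP metric.

```python
import math

def refracted_angle(theta1_deg, c1, c2):
    """Snell's law: sin(theta1)/c1 = sin(theta2)/c2.
    Angles measured from the vertical; returns None for total internal reflection."""
    s = math.sin(math.radians(theta1_deg)) * c2 / c1
    if abs(s) > 1.0:
        return None  # the ray is turned back rather than entering the second layer
    return math.degrees(math.asin(s))

def site_score(snr_db, echo_to_direct_db, penalty_weight=1.0):
    """Illustrative multipath-penalized score (an assumed form, not the MPP itself):
    higher SNR is better; strong echoes relative to the direct path are penalized."""
    return snr_db - penalty_weight * max(0.0, echo_to_direct_db)

# A ray entering a faster layer bends further from the vertical.
print(refracted_angle(30.0, 1440.0, 1465.0))            # ~30.6 degrees
print(site_score(snr_db=18.0, echo_to_direct_db=6.0))   # 12.0
```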

These data allowed the team to identify exactly where the vehicle should be positioned in the water column for optimal communications, which in turn enables accurate navigation.

While Howard gathered data on how the characteristics of the water impact acoustics, Goodwin focused on how sound is projected and reflected off the ever-changing ice on the surface.

To get these data, the AUV was outfitted with an acoustic device that measured the motion of the vehicle relative to the ice above. The sound it emitted was picked up by several receivers attached to moorings hanging from the ice.

The data from the vehicle and the receivers were then used by the researchers to compute exactly where the vehicle was at a given time. This location information, together with the data Howard gathered on the acoustic environment in the water, offer a new navigational concept for vehicles traveling in the Arctic Sea.

Protecting the Arctic

After a series of setbacks and challenges due to the unforgiving conditions in the Arctic, the team was able to successfully prove their navigational concept worked. Thanks to the team’s efforts, naval operations and future trade vessels may be able to take advantage of the changing conditions in the Arctic to maximize navigational accuracy and improve underwater communications.

After a series of setbacks and challenges due to the unforgiving conditions in the Arctic, the team was able to successfully prove their navigational concept worked. Image credits: Dan McDonald, General Dynamics Mission Systems

“Our work could improve the ability for the U.S. Navy to safely and effectively operate submarines under the ice for extended periods,” Howard says.

Howard acknowledges that in addition to the changes in physical climate, the geopolitical climate continues to change. This only strengthens the need for improved navigation in the Arctic.

“The U.S. Navy’s goal is to preserve peace and protect global trade by ensuring freedom of navigation throughout the world’s oceans,” she adds. “The navigational concept we proved during ICEX will serve to help the Navy in that mission.”

Simple robots, smart algorithms

Anyone with children knows that while controlling one child can be hard, controlling many at once can be nearly impossible. Getting swarms of robots to work collectively can be equally challenging, unless researchers carefully choreograph their interactions—like planes in formation—using increasingly sophisticated components and algorithms. But what can be reliably accomplished when the robots on hand are simple, inconsistent, and lack sophisticated programming for coordinated behavior?

ManipulaTHOR: a framework for visual object manipulation

The Allen Institute for AI (AI2) announced the 3.0 release of its embodied artificial intelligence framework AI2-THOR, which adds active object manipulation to its testing framework. ManipulaTHOR is a first-of-its-kind virtual agent with a highly articulated robot arm equipped with three joints of equal limb length and composed entirely of swivel joints, bringing a more human-like approach to object manipulation.

AI2-THOR is the first testing framework to study the problem of object manipulation in more than 100 visually rich, physics-enabled rooms. By enabling the training and evaluation of generalized capabilities in manipulation models, ManipulaTHOR allows for much faster training in more complex environments as compared to current real-world training methods, while also being far safer and more cost-effective.
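For a sense of how such a simulated environment is driven in practice, here is a minimal sketch using the open-source ai2thor Python package with its arm-equipped agent. Exact action names and parameters can differ between releases, so treat this as an approximation rather than the definitive ManipulaTHOR API.

```python
# Approximate sketch of driving an AI2-THOR scene with the arm-equipped agent.
# Requires: pip install ai2thor. Action names and parameters may vary by release.
from ai2thor.controller import Controller

controller = Controller(agentMode="arm", scene="FloorPlan1")  # a kitchen scene

# Step the agent forward, then command the arm toward a point in front of it.
event = controller.step(action="MoveAhead")
event = controller.step(action="MoveArm", position=dict(x=0.0, y=0.5, z=0.3))

print(event.metadata["lastActionSuccess"])  # did the arm motion succeed?
controller.stop()
```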

“Imagine a robot being able to navigate a kitchen, open a refrigerator and pull out a can of soda. This is one of the biggest and yet often overlooked challenges in robotics and AI2-THOR is the first to design a benchmark for the task of moving objects to various locations in virtual rooms, enabling reproducibility and measuring progress,” said Dr. Oren Etzioni, CEO at AI2. “After five years of hard work, we can now begin to train robots to perceive and navigate the world more like we do, making real-world usage models more attainable than ever before.”

Despite being an established research area in robotics, the visual reasoning aspect of object manipulation has consistently been one of the biggest hurdles researchers face. In fact, it’s long been understood that robots struggle to correctly perceive, navigate, act, and communicate with others in the world. AI2-THOR solves this problem with complex simulated testing environments that researchers can use to train robots for eventual activities in the real world.

With the pioneering of embodied AI through AI2-THOR, the landscape has changed for the common good. AI2-THOR enables researchers to efficiently devise solutions that address the object manipulation issue, and also other traditional problems associated with robotics testing.

“In comparison to running an experiment on an actual robot, AI2-THOR is incredibly fast and safe,” said Roozbeh Mottaghi, Research Manager at AI2. “Over the years, AI2-THOR has enabled research on many different tasks such as navigation, instruction following, multi-agent collaboration, performing household tasks, reasoning if an object can be opened or not. This evolution of AI2-THOR allows researchers and scientists to scale the current limits of embodied AI.”

In addition to the 3.0 release, the team is hosting the RoboTHOR Challenge 2021 in conjunction with the Embodied AI Workshop at this year’s Conference on Computer Vision and Pattern Recognition (CVPR). AI2’s challenges cover RoboTHOR object navigation; ALFRED (instruction following robots); and Room Rearrangement.

To read AI2-THOR’s ManipulaTHOR paper: ai2thor.allenai.org/publications

Biohybrid soft robot with self-stimulating skeleton outswims other biobots

A team of researchers working at Barcelona Institute of Science and Technology has developed a skeletal-muscle-based, biohybrid soft robot that can swim faster than other skeletal-muscle-based biobots. In their paper published in the journal Science Robotics, the group describes building and testing their soft robot.

Perfecting self-driving cars – can it be done?

posteriori/Shutterstock

Robotic vehicles have been used in dangerous environments for decades, from decommissioning the Fukushima nuclear power plant to inspecting underwater energy infrastructure in the North Sea. More recently, autonomous vehicles, from boats to grocery delivery carts, have made the gentle transition from research centres into the real world with very few hiccups.

Yet the promised arrival of self-driving cars has not progressed beyond the testing stage. And in one test drive of an Uber self-driving car in 2018, a pedestrian was killed by the vehicle. Although these accidents happen every day when humans are behind the wheel, the public holds driverless cars to far higher safety standards, interpreting one-off accidents as proof that these vehicles are too unsafe to unleash on public roads.

If only it were as easy as autonomous grocery delivery robots.
Jonathan Weiss/Shutterstock

Programming the perfect self-driving car that will always make the safest decision is a huge and technical task. Unlike other autonomous vehicles, which are generally rolled out in tightly controlled environments, self-driving cars must function in the endlessly unpredictable road network, rapidly processing many complex variables to remain safe.

Inspired by the highway code, we’re working on a set of rules that will help self-driving cars make the safest decisions in every conceivable scenario. Verifying that these rules work is the final roadblock we must overcome to get trustworthy self-driving cars safely onto our roads.

Asimov’s first law

Science fiction author Isaac Asimov penned the “three laws of robotics” in 1942. The first and most important law reads: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” When self-driving cars injure humans, they clearly violate this first law.




We at the National Robotarium are leading research intended to guarantee that self-driving vehicles will always make decisions that abide by this law. Such a guarantee would provide the solution to the very serious safety concerns that are preventing self-driving cars from taking off worldwide.

Self-driving cars must spot, process, and make decisions about hazards and risks almost instantly.
Jiraroj Praditcharoenkul/Alamy

AI software is actually quite good at learning about scenarios it has never faced. Using “neural networks” that take their inspiration from the layout of the human brain, such software can spot patterns in data, like the movements of cars and pedestrians, and then recall these patterns in novel scenarios.

But we still need to prove that any safety rules taught to self-driving cars will work in these new scenarios. To do this, we can turn to formal verification: the method that computer scientists use to prove that a rule works in all circumstances.

In mathematics, for example, rules can prove that x + y is equal to y + x without testing every possible value of x and y. Formal verification does something similar: it allows us to prove how AI software will react to different scenarios without our having to exhaustively test every scenario that could occur on public roads.
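As a toy illustration of that idea, an off-the-shelf SMT solver can prove such a property once for all integer inputs rather than testing values one by one. The sketch below uses the z3-solver Python bindings; it is only meant to show what "proving a rule for all cases" looks like in code, not to verify an actual driving policy.

```python
# Toy illustration of formal verification with an SMT solver (pip install z3-solver).
# Instead of testing individual examples, the solver proves the property for ALL integers.
from z3 import Ints, prove

x, y = Ints("x y")
prove(x + y == y + x)   # prints "proved"

# The same machinery reports a concrete counterexample when a claimed rule is wrong:
prove(x - y == y - x)   # prints "counterexample" with specific values of x and y
```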

One of the more notable recent successes in the field is the verification of an AI system that uses neural networks to avoid collisions between autonomous aircraft. Researchers have successfully formally verified that the system will always respond correctly, regardless of the horizontal and vertical manoeuvres of the aircraft involved.

Highway coding

Human drivers follow a highway code to keep all road users safe, which relies on the human brain to learn these rules and apply them sensibly in innumerable real-world scenarios. We can teach self-driving cars the highway code too. That requires us to unpick each rule in the code, teach vehicles’ neural networks to understand how to obey each rule, and then verify that they can be relied upon to safely obey these rules in all circumstances.

However, the challenge of verifying that these rules will be safely followed is complicated when examining the consequences of the phrase “must never” in the highway code. To make a self-driving car as reactive as a human driver in any given scenario, we must program these policies in such a way that accounts for nuance, weighted risk and the occasional scenario where different rules are in direct conflict, requiring the car to ignore one or more of them.
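To make the flavour of that problem concrete, here is a deliberately simplified, hypothetical sketch (not the AISEC project's method) in which each rule carries a weight reflecting its priority and the car picks the candidate manoeuvre with the lowest total cost of weighted violations, so a lower-priority rule can be broken to satisfy a higher-priority one.

```python
# Hypothetical illustration of weighted rule conflicts (not the AISEC approach).
# Each rule returns True if a candidate manoeuvre violates it; weights encode priority.

RULES = [
    ("never hit a pedestrian",   1000, lambda m: m["hits_pedestrian"]),
    ("never cross a solid line",   10, lambda m: m["crosses_solid_line"]),
    ("never brake harshly",         1, lambda m: m["harsh_braking"]),
]

def cost(manoeuvre):
    return sum(weight for _, weight, violated in RULES if violated(manoeuvre))

candidates = [
    {"name": "stay in lane", "hits_pedestrian": True,  "crosses_solid_line": False, "harsh_braking": False},
    {"name": "swerve",       "hits_pedestrian": False, "crosses_solid_line": True,  "harsh_braking": False},
    {"name": "brake hard",   "hits_pedestrian": False, "crosses_solid_line": False, "harsh_braking": True},
]

best = min(candidates, key=cost)
print(best["name"], cost(best))   # "brake hard" carries the lowest weighted cost
```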


Robot ethicist Patrick Lin introducing the complexity of automated decision-making in self-driving cars.

Such a task cannot be left solely to programmers – it’ll require input from lawyers, security experts, system engineers and policymakers. Within our newly formed AISEC project, a team of researchers is designing a tool to facilitate the kind of interdisciplinary collaboration needed to create ethical and legal standards for self-driving cars.

Teaching self-driving cars to be perfect will be a dynamic process: dependent upon how legal, cultural and technological experts define perfection over time. The AISEC tool is being built with this in mind, offering a “mission control panel” to monitor, supplement and adapt the most successful rules governing self-driving cars, which will then be made available to the industry.

We’re hoping to deliver the first experimental prototype of the AISEC tool by 2024. But we still need to create adaptive verification methods to address remaining safety and security concerns, and these will likely take years to build and embed into self-driving cars.

Accidents involving self-driving cars always create headlines. A self-driving car that recognises a pedestrian and stops before hitting them 99% of the time is a cause for celebration in research labs, but a killing machine in the real world. By creating robust, verifiable safety rules for self-driving cars, we’re attempting to make that 1% of accidents a thing of the past.


The Conversation

Ekaterina Komendantskaya receives funding from EPSRC, NCSC, DSTL.

Luca Arnaboldi and Matthew Daggitt do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

Original post published in The Conversation.

Robohub and AIhub’s free workshop trial on sci-comm of robotics and AI

A robot in a field
Image credit: wata1219 on flickr (CC BY-NC-ND 2.0)

Would you like to learn how to tell your robotics/AI story to the public? Robohub and AIhub are testing a new workshop to train you as the next generation of communicators. You will learn to quickly create your story and shape it to any format, from short tweets to blog posts and beyond. In addition, you will learn how to communicate about robotics/AI in a realistic way (avoiding the hype), and will receive tips from top communicators, science journalists and early career researchers. If you would like to be one of our beta testers, join this free workshop to experience how much impact science communication can have on your professional journey!

The workshop is taking place on Friday the 30th of April, 10am-12.30pm (UK time) via Zoom. Please sign up by sending an email to daniel.carrillozapata@robohub.org.

Pepper the robot talks to itself to improve its interactions with people

Ever wondered why your virtual home assistant doesn't understand your questions? Or why your navigation app took you on the side street instead of the highway? In a study published April 21st in the journal iScience, Italian researchers designed a robot that "thinks out loud" so that users can hear its thought process and better understand the robot's motivations and decisions.

DLL: A map-based localization framework for aerial robots

To enable the efficient operation of unmanned aerial vehicles (UAVs) in instances where a global positioning system (GPS) or an external positioning device (e.g., a laser reflector) is unavailable, researchers must develop techniques that automatically estimate a robot's pose. If the environment in which a drone operates does not change very often and one is able to build a 3D map of this environment, map-based robot localization techniques can be fairly effective.
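As a generic illustration of that idea (not the DLL algorithm itself), map-based localization can be viewed as scoring candidate poses by how closely the robot's range measurements, transformed by each pose, land on the prior map. A minimal 2D sketch with made-up points:

```python
# Generic illustration of map-based pose scoring (not the DLL method).
# A candidate 2D pose (x, y, yaw) is scored by the mean distance between the
# transformed scan points and their nearest neighbours in a prior map.
import numpy as np
from scipy.spatial import cKDTree

map_points = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])  # toy "map"
scan = np.array([[1.0, 0.0], [2.0, 0.0]])                                # toy scan

def pose_cost(pose, scan, map_tree):
    x, y, yaw = pose
    R = np.array([[np.cos(yaw), -np.sin(yaw)], [np.sin(yaw), np.cos(yaw)]])
    world = scan @ R.T + np.array([x, y])        # transform scan into map frame
    dists, _ = map_tree.query(world)             # nearest map point per scan point
    return dists.mean()

tree = cKDTree(map_points)
candidates = [(0.0, 0.0, 0.0), (0.5, 0.3, 0.1)]
best = min(candidates, key=lambda p: pose_cost(p, scan, tree))
print(best)   # the pose whose transformed scan best matches the map
```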

Robots benefit special education students

Researchers at the University of Twente have discovered that primary school children in both regular and special needs schools make strides when they learn together with a robot. On 30 April, both Daniel Davison and Bob Schadenberg will obtain their Ph.D.s from UT, with comparable research but working in different contexts.