
Computer systems predict objects’ responses to physical forces

As part of an investigation into the nature of humans’ physical intuitions, MIT researchers trained a neural network to predict how unstably stacked blocks would respond to the force of gravity.
Image: Christine Daniloff/MIT

Josh Tenenbaum, a professor of brain and cognitive sciences at MIT, directs research on the development of intelligence at the Center for Brains, Minds, and Machines, a multiuniversity, multidisciplinary project based at MIT that seeks to explain and replicate human intelligence.

Presenting their work at this year’s Conference on Neural Information Processing Systems, Tenenbaum and one of his students, Jiajun Wu, are co-authors on four papers that examine the fundamental cognitive abilities that an intelligent agent requires to navigate the world: discerning distinct objects and inferring how they respond to physical forces.

By building computer systems that begin to approximate these capacities, the researchers believe they can help answer questions about what information-processing resources human beings use at what stages of development. Along the way, the researchers might also generate some insights useful for robotic vision systems.

“The common theme here is really learning to perceive physics,” Tenenbaum says. “That starts with seeing the full 3-D shapes of objects, and multiple objects in a scene, along with their physical properties, like mass and friction, then reasoning about how these objects will move over time. Jiajun’s four papers address this whole space. Taken together, we’re starting to be able to build machines that capture more and more of people’s basic understanding of the physical world.”

Three of the papers deal with inferring information about the physical structure of objects, from both visual and aural data. The fourth deals with predicting how objects will behave on the basis of that data.

Two-way street

Something else that unites all four papers is their unusual approach to machine learning, a technique in which computers learn to perform computational tasks by analyzing huge sets of training data. In a typical machine-learning system, the training data are labeled: Human analysts will have, say, identified the objects in a visual scene or transcribed the words of a spoken sentence. The system attempts to learn what features of the data correlate with what labels, and it’s judged on how well it labels previously unseen data.

In Wu and Tenenbaum’s new papers, the system is trained to infer a physical model of the world — the 3-D shapes of objects that are mostly hidden from view, for instance. But then it works backward, using the model to resynthesize the input data, and its performance is judged on how well the reconstructed data matches the original data.

For instance, using visual images to build a 3-D model of an object in a scene requires stripping away any occluding objects; filtering out confounding visual textures, reflections, and shadows; and inferring the shape of unseen surfaces. Once Wu and Tenenbaum’s system has built such a model, however, it rotates it in space and adds visual textures back in until it can approximate the input data.
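
As a rough illustration of this reconstruct-and-compare training loop, a minimal sketch in Python/PyTorch is below. The encoder and renderer modules are placeholders rather than the authors' actual architecture; the point is only that the training signal comes from comparing the re-synthesized view with the original input instead of a human-provided label.

    import torch

    def reconstruction_step(image, encoder, renderer, optimizer):
        """One illustrative training step: infer a 3-D model from an image,
        re-render it, and penalize mismatch with the original input."""
        shape_model = encoder(image)              # inferred 3-D representation
        reconstruction = renderer(shape_model)    # re-synthesized input view
        loss = torch.nn.functional.mse_loss(reconstruction, image)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()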

Indeed, two of the researchers’ four papers address the complex problem of inferring 3-D models from visual data. On those papers, they’re joined by four other MIT researchers, including William Freeman, the Perkins Professor of Electrical Engineering and Computer Science, and by colleagues at DeepMind, ShanghaiTech University, and Shanghai Jiao Tong University.

Divide and conquer

The researchers’ system is based on the influential theories of the MIT neuroscientist David Marr, who died in 1980 at the tragically young age of 35. Marr hypothesized that in interpreting a visual scene, the brain first creates what he called a 2.5-D sketch of the objects it contained — a representation of just those surfaces of the objects facing the viewer. Then, on the basis of the 2.5-D sketch — not the raw visual information about the scene — the brain infers the full, three-dimensional shapes of the objects.

“Both problems are very hard, but there’s a nice way to disentangle them,” Wu says. “You can do them one at a time, so you don’t have to deal with both of them at the same time, which is even harder.”

Wu and his colleagues’ system needs to be trained on data that include both visual images and 3-D models of the objects the images depict. Constructing accurate 3-D models of the objects depicted in real photographs would be prohibitively time consuming, so initially, the researchers train their system using synthetic data, in which the visual image is generated from the 3-D model, rather than vice versa. The process of creating the data is like that of creating a computer-animated film.

Once the system has been trained on synthetic data, however, it can be fine-tuned using real data. That’s because its ultimate performance criterion is the accuracy with which it reconstructs the input data. It’s still building 3-D models, but they don’t need to be compared to human-constructed models for performance assessment.

In evaluating their system, the researchers used a measure called intersection over union, which is common in the field. On that measure, their system outperforms its predecessors. But a given intersection-over-union score leaves a lot of room for local variation in the smoothness and shape of a 3-D model. So Wu and his colleagues also conducted a qualitative study of the models’ fidelity to the source images. Of the study’s participants, 74 percent preferred the new system’s reconstructions to those of its predecessors.
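
For reference, intersection over union for two voxelized 3-D reconstructions can be computed as in the short sketch below; the thresholding and grid size are illustrative choices, not details from the paper.

    import numpy as np

    def voxel_iou(predicted, ground_truth):
        """IoU of two boolean occupancy grids of the same shape."""
        predicted = predicted.astype(bool)
        ground_truth = ground_truth.astype(bool)
        union = np.logical_or(predicted, ground_truth).sum()
        if union == 0:
            return 1.0                            # both grids empty: treat as perfect agreement
        return np.logical_and(predicted, ground_truth).sum() / union

    # e.g. voxel_iou(model_output > 0.5, target_voxels) for two 32x32x32 occupancy grids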

All that fall

In another of Wu and Tenenbaum’s papers, on which they’re joined again by Freeman and by researchers at MIT, Cambridge University, and ShanghaiTech University, they train a system to analyze audio recordings of an object being dropped, to infer properties such as the object’s shape, its composition, and the height from which it fell. Again, the system is trained to produce an abstract representation of the object, which, in turn, it uses to synthesize the sound the object would make when dropped from a particular height. The system’s performance is judged on the similarity between the synthesized sound and the source sound.

Finally, in their fourth paper, Wu, Tenenbaum, Freeman, and colleagues at DeepMind and Oxford University describe a system that begins to model humans’ intuitive understanding of the physical forces acting on objects in the world. This paper picks up where the previous papers leave off: It assumes that the system has already deduced objects’ 3-D shapes.

Those shapes are simple: balls and cubes. The researchers trained their system to perform two tasks. The first is to estimate the velocities of balls traveling on a billiard table and, on that basis, to predict how they will behave after a collision. The second is to analyze a static image of stacked cubes and determine whether they will fall and, if so, where the cubes will land.

Wu developed a representational language he calls scene XML that can quantitatively characterize the relative positions of objects in a visual scene. The system first learns to describe input data in that language. It then feeds that description to something called a physics engine, which models the physical forces acting on the represented objects. Physics engines are a staple of both computer animation, where they generate the movement of clothing, falling objects, and the like, and of scientific computing, where they’re used for large-scale physical simulations.
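
The article does not give the actual scene XML schema, so the sketch below uses a made-up, minimal scene description purely to illustrate the pipeline: a structured list of objects with positions and masses is parsed and handed to a toy physics step. A real physics engine would of course also model collisions, friction, and contact forces.

    import xml.etree.ElementTree as ET

    SCENE = """
    <scene>
      <object shape="cube" x="0.0" y="0.0" z="1.0" mass="1.0"/>
      <object shape="ball" x="0.5" y="0.0" z="2.0" mass="0.2"/>
    </scene>
    """

    def parse_scene(xml_text):
        """Turn the (hypothetical) scene description into a list of objects."""
        objects = []
        for node in ET.fromstring(xml_text):
            objects.append({
                "shape": node.get("shape"),
                "pos": [float(node.get(k)) for k in ("x", "y", "z")],
                "vel": [0.0, 0.0, 0.0],
                "mass": float(node.get("mass")),
            })
        return objects

    def physics_step(objects, dt=0.01, g=-9.81):
        """One explicit-Euler step of free fall under gravity."""
        for obj in objects:
            obj["vel"][2] += g * dt
            obj["pos"] = [p + v * dt for p, v in zip(obj["pos"], obj["vel"])]
        return objects

    objects = parse_scene(SCENE)
    for _ in range(100):                          # simulate one second of falling
        objects = physics_step(objects)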

After the physics engine has predicted the motions of the balls and boxes, that information is fed to a graphics engine, whose output is, again, compared with the source images. As with the work on visual discrimination, the researchers train their system on synthetic data before refining it with real data.

In tests, the researchers’ system again outperformed its predecessors. In fact, in some of the tests involving billiard balls, it frequently outperformed human observers as well.

FT 300 force torque sensor

Robotiq Updates FT 300 Sensitivity For High Precision Tasks With Universal Robots
Force Torque Sensor feeds data to Universal Robots force mode

Quebec City, Canada, November 13, 2017 – Robotiq launches a 10 times more sensitive version of its FT 300 Force Torque Sensor. With Plug + Play integration on all Universal Robots, the FT 300 performs highly repeatable precision force control tasks such as finishing, product testing, assembly and precise part insertion.

This force torque sensor comes with updated free URCap software able to feed data to the Universal Robots Force Mode. “This new feature allows the user to perform precise force insertion assembly and many finishing applications where force control with high sensitivity is required,” explains Robotiq CTO Jean-Philippe Jobin*.

The URCap also includes a new calibration routine. “We’ve integrated a step-by-step procedure that guides the user through the process, which takes less than 2 minutes” adds Jobin. “A new dashboard also provides real-time force and moment readings on all 6 axes. Moreover, pre-built programming functions are now embedded in the URCap for intuitive programming.”

See some of the FT 300’s new capabilities in the following demo videos:

#1 How to calibrate with the FT 300 URCap Dashboard

#2 Linear search demo

#3 Path recording demo

Visit the FT 300 webpage or get a quote here

Get the FT 300 specs here

Get more info in the FAQ

Get free Skills to accelerate robot programming of force control tasks.

Get free robot cell deployment resources on leanrobotics.org

* Available with Universal Robots CB3.1 controller only

About Robotiq

Robotiq’s Lean Robotics methodology and products enable manufacturers to deploy productive robot cells across their factory. They leverage the Lean Robotics methodology for faster time to production and increased productivity from their robots. Production engineers standardize on Robotiq’s Plug + Play components for their ease of programming, built-in integration, and adaptability to many processes. They rely on the Flow software suite to accelerate robot projects and optimize robot performance once in production.

Robotiq is the humans behind the robots: an employee-owned business with a passionate team and an international partner network.

Media contact

David Maltais, Communications and Public Relations Coordinator

d.maltais@robotiq.com

1-418-929-2513

////

Press Release Provided by: Robotiq.Com

Swiss drone industry map

The Swiss Drone Industry Map above is an attempt to list and group all companies and institutions in Switzerland that provide a product or service that makes commercial operations of Unmanned Aerial Vehicles possible. An entity can only appear in one box (i.e. main activity) and must be publicly promoting existing or future products and services. Swiss drone research centres, system integrators and pilot schools are not part of the map. Corrections, suggestions and new submissions are welcome!

I’ve added all the links below so you can easily click through and learn more.

Manufacturers
SwissDrones
senseFly
Flyability
Fotokite
Sunflower Labs
Voliro
AgroFly
Skybotix
Aeroscout
Wingtra
Flying Robots

Flight Systems
PX4-Pro
MotionPilot
WindShape
weControl
Rapyuta Robotics
Daedalean
UAVenture

Governance
FOCA
Drohnen Verband
Zurich
DroneLogbook
UAWaero
SGS
Swiss Re

Traffic Management
Global UTM Association
SITA
Flarm
Skyguide
SkySoft
OneSky

Defense & Security
ViaSat
RUAG
Rheinmetall
UMS Skeldar
Aurora Swiss Aerospace
Kudelski
IDQ

Sensors & Payload
Terabee
uBlox
Distran
FixPosition
SkyAware
Insightness
Sensirion
Sensima Technology

Electric & Solar Propulsion
Evolaris Aviation
AtlantikSolar
MecaPlex
H55
Maxon Motor
Faulhaber

Delivery
Matternet – Swiss Post
Dronistics
Redline – Droneport
Deldro

Energy
TwingTec
SkyPull

Analytics
Picterra
Pix4D
Gamaya
MeteoMatics

Entertainment
Verity Studios
AEROTAIN
Anabatic.Aero
LémanTech

Humanitarian
WeRobotics
FSD
Redog
Swiss Fang

High-Altitude
SolarStratos
OpenStratosphere

Robots solving climate change

The two biggest societal challenges for the twenty-first century are also the biggest opportunities – automation and climate change. These forces of mankind and nature intersect beautifully in the alternative energy market. The waning era of fossil fuels, with its dark cloud of emissions, is giving way to a rise of solar and wind farms worldwide. Servicing these installations are fleets of robots and drones, opening greater possibilities for expanding CleanTech to the most remote regions of the planet.

As 2017 comes to an end, the solar industry has plateaued for the first time in ten years, due in part to budget cuts proposed by the Trump administration. Solar has had quite a run, with an average annual growth rate of more than 65% over the past decade, promoted largely by federal subsidies. The progressive policy of the Obama administration made the US a leader in alternative energy, resulting in a quarter-million new jobs. While the Federal Government now re-embraces the antiquated allure of fossil fuels, global demand for solar has been rising as infrastructure costs decline by more than half, providing new opportunities even without government incentives.

Prior to the renewable energy boom, unattractive rooftop panels were the most visible image of solar. While Elon Musk and others are developing more aesthetically pleasing roofing materials, the business model of house-by-house conversion has proven inefficient. Instead, the industry is focusing on “utility-scale” solar farms that will be connected to the national grid. Until recently, such farms have been saddled with ballooning servicing costs.

In a report published last month, leading energy risk management company DNV GL concluded that the alternative energy market could benefit greatly from utilizing Artificial Intelligence (AI) and robotics in designing, developing, deploying and maintaining utility farms. The study, “Making Renewables Smarter: The Benefits, Risks, And Future of Artificial Intelligence In Solar And Wind,” cited the “fields of resource forecasting, control and predictive maintenance” as ripe for tech disruption. Elizabeth Traiger, co-author of the report, explained, “Solar and wind developers, operators, and investors need to consider how their industries can use it, what the impacts are on the industries in a larger sense, and what decisions those industries need to confront.”

Since solar farms are often located in arid, dusty locations, one of the earliest use cases for unmanned systems was self-cleaning robots. As reported in 2014, Israeli company Ecoppia developed a patented waterless panel-washing platform to keep solar up and running in the desert. Today, Ecoppia is cleaning 10 million panels a month. Eran Meller, Chief Executive of Ecoppia, boasts, “We’re pleased to bring the experience gained over four years of cleaning in multiple sites in the Middle East. Cleaning 80 million solar panels in the harshest desert conditions globally, we expect to continue to play a leading role in this growing market.”

Since Ecoppia began selling commercially, there have been other entries into the unmanned maintenance space. This past March, Exosun became the latest to offer autonomous cleaning bots. The track equipment manufacturer claims that its robotic systems can cut production losses by 2%, promising a return on investment within 18 months. After its acquisition of Greenbotics in 2013, US-based SunPower also launched its own mechanized cleaning platform, the Oasis, which combines mobile robots and drones.

SunPower brags that its products are ten times faster than traditional (manual) methods while using 75% less water. While SunPower and Exosun leverage their large sales footprints and existing servicing and equipment networks, Ecoppia is still the product leader. Its proprietary waterless system offers the most cost-effective and connected solution on the market. Via a robust cloud network, Ecoppia can sense weather fluctuations and automatically schedule emergency cleanings. Anat Cohen Segev, Director of Marketing, explains, “Within seconds, we would detect a dust storm hitting the site, the master control will automatically suggest an additional cleaning cycle and within a click the entire site will be cleaned.” According to Segev, the robots remove 99% of the dust on the panels.

Drone companies are also entering the maintenance space. Upstart Aerial Power claims to have designed a “SolarBrush” quadcopter that cleans panels. The solar-powered drone is said to cut a solar farm’s operational costs by 60%. SolarBrush also promises an 80% savings over existing solutions like Ecoppia, since there are no installation costs. However, Aerial Power has yet to fly its product in the field, as it is still in development. SunPower, meanwhile, is selling its own drone survey platform to assess development sites and oversee field operations. Matt Campbell, Vice President of Power Plant Products for SunPower, stated, “A lot of the beginning of the solar industry was focused on the panel. Now we’re looking at innovation all around the rest of the system. That’s why we’re always surveying new technology — whether it’s a robot, whether it’s a drone, whether it’s software — and saying, ‘How can this help us reduce the cost of solar, build projects faster, and make them more reliable?’”

In 2008, the US Department of Energy published an ambitious proposal, “20% Wind Energy by 2030: Increasing Wind Energy’s Contribution to U.S. Electricity Supply.” Presently, thirteen years before that goal, less than 5% of US energy is derived from wind. Developing wind farms is not novel; however, to achieve 20% by 2030, the US needs to begin looking offshore. To put it in perspective, oceanic wind farms could generate more than 2,000 gigawatts of clean, carbon-free energy, or twice as much electricity as Americans currently consume. To date, there is only one wind farm operating off the coast of the United States. While almost every coastal state has proposals for offshore farms, the industry has been stalled by politics and by the servicing hurdles of dangerous waters.

For more than a decade the United Kingdom has led the development of offshore wind farms. At the University of Manchester, a leading group of researchers has been exploring a number of AI, robotic and drone technologies for remote inspections. The consortium of academics estimates that these technologies could generate more than $2.5 billion by 2025 in the UK alone. The global offshore market could reach $17 billion by 2020, with 80% of the costs coming from operations and maintenance.

Last month, Innovate UK awarded $1.6 million to Perceptual Robotics and VulcanUAV to incorporate drones and autonomous boats into ocean inspections. These startups follow the business model of successful US inspection upstarts like SkySpecs. Launched three years ago, SkySpecs’ autonomous drones claim to reduce turbine inspections from days to minutes. Danny Ellis, SkySpecs Chief Executive, claims, “Customers that could once inspect only one-third of a wind farm can now do the whole farm in the same amount of time.” Last year, British startup Visual Working accomplished the herculean feat of surpassing 2,000 blade inspections.

In the words of Paolo Brianzoni, Chief Executive of Visual Working: “We are not talking about what we intend to accomplish in the near future – but actually performing our UAV inspection service every day out there. Many in the industry are using a considerable amount of time discussing and testing how to use UAV inspections in a safe and efficient way. We have passed that point and have alone, in the first half of 2016, inspected 250 turbines in the North Sea, averaging more than 10 WTG per day, and still keeping to the highest quality standards.”

This past summer, Europe achieved another clean-energy milestone with the announcement of three new offshore wind farms that, for the first time, will be built without government subsidies. By bringing down the cost structure, autonomous systems are turning the tide for alternative energy regardless of government investment. Three days before leaving office, President Barack Obama wrote in the journal Science that “Evidence is mounting that any economic strategy that ignores carbon pollution will impose tremendous costs to the global economy and will result in fewer jobs and less economic growth over the long term.” He declared that it is time to move past the common misconception that climate policy is at odds with business: “rather, it can boost efficiency, productivity, and innovation.”

Vecna Robotics Wins DHL & Dell Robotics Innovation Challenge 2017 with Tote Retrieval System

Vecna Robotics, a leader in intelligent, next-generation robotic material handling autonomous ground vehicles (AGVs), was awarded first place in the DHL & Dell Robotics Mobile Picking Challenge 2017. The event was held last week at the DHL Innovation Center in Troisdorf, Germany.

Three very different startups vie for “Robohub Choice”

Three very different robotics startups have been battling it out over the last week to win the “Robohub Choice” award in our annual startup competition. One was social, one was medical and one was agricultural! Also, one was from the UK, one was from Ukraine and one was from Canada. Although nine startups entered the voting, it was clear from the start that it was a three-horse race – thanks to our Robohub readers and the social media efforts of the startups.

The most popular startup was UniExo with 70.6% of the vote, followed by BotsAndUs on 14.8% and Northstar Robotics on 13.2%.

These three startups will be able to spend time in the Silicon Valley Robotics Accelerator/Cowork Space in Oakland, and we hope to have a feature about each startup on Robohub over the coming year. The overall winner(s) of the Robot Launch 2017 competition will be announced on December 15. The grand prize is investment of up to $500,000 from The Robotics Hub, while all award winners get access to the Silicon Valley Robotics accelerator program and cowork space.

UniExo | Ukraine

UniExo aims to help people with injuries and movement problems to restore the motor functions of their bodies with modular robotic exoskeleton devices, without additional help of doctors.

Thanks to our device, we can help these users in rehabilitation. The product provides free movement for people with disabilities in a comfortable and safe form, without outside help, as well as for people recovering from surgery or trauma.

We can give people a second chance at a normal life, and motivate them to do things for our world that can help other people.

https://youtu.be/kjHN35zasvE

BotsAndUs | UK (@botsandus)

BotsAndUs believe in humans and robots collaborating towards a better life. Our aim is to create physical and emotional comfort with robots to support wide adoption.

In May ‘17 we launched Bo, a social robot for events, hospitality and retail. Bo approaches you in shops, hotels or hospitals, finds out what you need, takes you to it and gives you tips on the latest bargains.

In a short time the business has grown considerably: global brands as customers (British Telecom, Etisalat, Dixons), a Government award for our Human-Robot-Interaction tech, members of Nvidia’s Inception program and intuAccelerate (bringing Bo to UK’s top 10 malls), >15k Bo interactions.

https://youtu.be/jrLaoKShKT4

Northstar Robotics | Canada (@northstarrobot)

Northstar Robotics is an agricultural technology company that was founded by an experienced farmer and robotics engineer.

Our vision is to create the fully autonomous farm which will address the labour shortage problem and lower farm input costs.  We will make this vision a reality by first providing an open hardware and software platform to allow current farm equipment to become autonomous.  In parallel, we are going to build super awesome robots that will transform farming and set the standard for what modern agricultural equipment should be.

https://youtu.be/o2C4Cx-m2es

 

 

Reading a neural network’s mind

Neural nets are so named because they roughly approximate the structure of the human brain. Typically, they’re arranged into layers, and each layer consists of many simple processing units — nodes — each of which is connected to several nodes in the layers above and below. Data is fed into the lowest layer, whose nodes process it and pass it to the next layer. The connections between layers have different “weights,” which determine how much the output of any one node figures into the calculation performed by the next.
Image: Chelsea Turner/MIT

By Larry Hardesty

Neural networks, which learn to perform computational tasks by analyzing huge sets of training data, have been responsible for the most impressive recent advances in artificial intelligence, including speech-recognition and automatic-translation systems.

During training, however, a neural net continually adjusts its internal settings in ways that even its creators can’t interpret. Much recent work in computer science has focused on clever techniques for determining just how neural nets do what they do.

In several recent papers, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Qatar Computing Research Institute have used a recently developed interpretive technique, which had been applied in other areas, to analyze neural networks trained to do machine translation and speech recognition.

They find empirical support for some common intuitions about how the networks probably work. For example, the systems seem to concentrate on lower-level tasks, such as sound recognition or part-of-speech recognition, before moving on to higher-level tasks, such as transcription or semantic interpretation.

But the researchers also find a surprising omission in the type of data the translation network considers, and they show that correcting that omission improves the network’s performance. The improvement is modest, but it points toward the possibility that analysis of neural networks could help improve the accuracy of artificial intelligence systems.

“In machine translation, historically, there was sort of a pyramid with different layers,” says Jim Glass, a CSAIL senior research scientist who worked on the project with Yonatan Belinkov, an MIT graduate student in electrical engineering and computer science. “At the lowest level there was the word, the surface forms, and the top of the pyramid was some kind of interlingual representation, and you’d have different layers where you were doing syntax, semantics. This was a very abstract notion, but the idea was the higher up you went in the pyramid, the easier it would be to translate to a new language, and then you’d go down again. So part of what Yonatan is doing is trying to figure out what aspects of this notion are being encoded in the network.”

The work on machine translation was presented recently in two papers at the International Joint Conference on Natural Language Processing. On one, Belinkov is first author and Glass is senior author; on the other, Belinkov is a co-author. On both, they’re joined by researchers from the Qatar Computing Research Institute (QCRI), including Lluís Màrquez, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and Stephan Vogel. Belinkov and Glass are sole authors on the paper analyzing speech recognition systems, which Belinkov presented last week at the Conference on Neural Information Processing Systems.

Leveling down

Neural nets are so named because they roughly approximate the structure of the human brain. Typically, they’re arranged into layers, and each layer consists of many simple processing units — nodes — each of which is connected to several nodes in the layers above and below. Data are fed into the lowest layer, whose nodes process it and pass it to the next layer. The connections between layers have different “weights,” which determine how much the output of any one node figures into the calculation performed by the next.

During training, the weights between nodes are constantly readjusted. After the network is trained, its creators can determine the weights of all the connections, but with thousands or even millions of nodes, and even more connections between them, deducing what algorithm those weights encode is nigh impossible.

The MIT and QCRI researchers’ technique consists of taking a trained network and using the output of each of its layers, in response to individual training examples, to train another neural network to perform a particular task. This enables them to determine what task each layer is optimized for.

In the case of the speech recognition network, Belinkov and Glass used individual layers’ outputs to train a system to identify “phones,” distinct phonetic units particular to a spoken language. The “t” sounds in the words “tea,” “tree,” and “but,” for instance, might be classified as separate phones, but a speech recognition system has to transcribe all of them using the letter “t.” And indeed, Belinkov and Glass found that lower levels of the network were better at recognizing phones than higher levels, where, presumably, the distinction is less important.
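
A minimal sketch of this probing idea is below, assuming for illustration that the trained network is a simple stack of layers (such as a torch.nn.Sequential) and that phone labels are available for each input; the data handling is a placeholder, not the authors' setup.

    import torch
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    def probe_layer(trained_layers, layer_index, inputs, phone_labels):
        """Train a simple classifier ('probe') on one frozen layer's activations
        and report held-out accuracy on the auxiliary phone-recognition task."""
        with torch.no_grad():                     # the original network stays frozen
            activations = inputs
            for i, layer in enumerate(trained_layers):
                activations = layer(activations)
                if i == layer_index:
                    break
        features = activations.numpy()
        X_train, X_test, y_train, y_test = train_test_split(
            features, phone_labels, test_size=0.2, random_state=0)
        probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        return probe.score(X_test, y_test)        # higher accuracy suggests the layer encodes phones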

Similarly, in an earlier paper, presented last summer at the Annual Meeting of the Association for Computational Linguistics, Glass, Belinkov, and their QCRI colleagues showed that the lower levels of a machine-translation network were particularly good at recognizing parts of speech and morphology — features such as tense, number, and conjugation.

Making meaning

But in the new paper, they show that higher levels of the network are better at something called semantic tagging. As Belinkov explains, a part-of-speech tagger will recognize that “herself” is a pronoun, but the meaning of that pronoun — its semantic sense — is very different in the sentences “she bought the book herself” and “she herself bought the book.” A semantic tagger would assign different tags to those two instances of “herself,” just as a machine translation system might find different translations for them in a given target language.

The best-performing machine-translation networks use so-called encoder-decoder models, and the MIT and QCRI researchers’ network follows the same design. In such systems, the input, in the source language, passes through several layers of the network — known as the encoder — to produce a vector, a string of numbers that somehow represents the semantic content of the input. That vector passes through several more layers of the network — the decoder — to yield a translation in the target language.

Although the encoder and decoder are trained together, they can be thought of as separate networks. The researchers discovered that, curiously, the lower layers of the encoder are good at distinguishing morphology, but the higher layers of the decoder are not. So Belinkov and the QCRI researchers retrained the network, scoring its performance according to not only accuracy of translation but also analysis of morphology in the target language. In essence, they forced the decoder to get better at distinguishing morphology.
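
A hedged sketch of that retraining idea: an encoder-decoder model whose decoder states also feed an auxiliary morphology tagger, so the loss mixes translation with morphology tagging in the target language. The module names, tensor shapes, and the 0.3 weighting are illustrative assumptions, not the configuration reported in the paper.

    import torch
    import torch.nn as nn

    class TranslatorWithMorphology(nn.Module):
        def __init__(self, encoder, decoder, hidden, vocab_size, num_morph_tags):
            super().__init__()
            self.encoder, self.decoder = encoder, decoder
            self.word_head = nn.Linear(hidden, vocab_size)        # translation output
            self.morph_head = nn.Linear(hidden, num_morph_tags)   # auxiliary morphology tagger

        def forward(self, source, target):
            context = self.encoder(source)          # vector(s) summarizing the source sentence
            states = self.decoder(target, context)  # decoder hidden states, (batch, seq, hidden)
            return self.word_head(states), self.morph_head(states)

    def joint_loss(word_logits, morph_logits, word_labels, morph_labels, alpha=0.3):
        """Translation loss plus a weighted morphology-tagging loss."""
        ce = nn.functional.cross_entropy
        return (ce(word_logits.flatten(0, 1), word_labels.flatten())
                + alpha * ce(morph_logits.flatten(0, 1), morph_labels.flatten()))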

Using this technique, they retrained the network to translate English into German and found that its accuracy increased by 3 percent. That’s not an overwhelming improvement, but it’s an indication that looking under the hood of neural networks could be more than an academic exercise.

#249: ICRA 2017 Company Showcase, with Li Bingbing, Xianbao Chen, Howard Michel and Lester Teh Chee Onn

Image: ICRA 2017

In this episode, Audrow Nash interviews several companies at the International Conference for Robotics and Automation (ICRA). ICRA is the IEEE Robotics and Automation Society’s biggest conference and one of the leading international forums for robotics researchers to present their work.

Interviews:

Li Bingbing, Software Engineer and Cofounder of Transforma in Singapore, on a robot for painting tall buildings.

Xianbao Chen, Associate Researcher at Shanghai Jiao Tong University, on a hexapod robot for large parts machining.

Howard Michel, Chief Technology Officer of UBTech Education, on a humanoid robot for STEM (Science Technology Engineering, and Math) education for a range of ages.

Lester Teh Chee Onn, Environmental Engineer at Advisian, on a watercraft for environmental monitoring.

 

Links

Robohub Podcast is on Patreon!

Robohub Podcast has launched a campaign on Patreon!

If you don’t know, Robohub Podcast is a biweekly podcast about robotics. Our goal is to explore global robotics through interviews with experts, both in academia and industry. In our interviews,

  • we discuss technical topics (how things work, design decisions),
  • entrepreneurship (lessons learned, business models, ownership),
  • and anything we find interesting and related to robotics (policy, ethics, global trends, international technology initiatives and education, etc.).

We have published nearly 250 episodes and have spoken with many of the most influential people in robotics, such as Rodney Brooks, Dean Kamen, Radhika Nagpal, and Helen Griener.

We would like your support so we can bring you interviews from the leading robotics conferences and laboratories around the world. Our first goal is to send two interviewers to ICRA 2018 in Brisbane, Australia.

If you want to support us, visit our Patreon campaign.

 

FaSTrack: Ensuring safe real-time navigation of dynamic systems


By Sylvia Herbert, David Fridovich-Keil, and Claire Tomlin

The Problem: Fast and Safe Motion Planning

Real time autonomous motion planning and navigation is hard, especially when we care about safety. This becomes even more difficult when we have systems with complicated dynamics, external disturbances (like wind), and a priori unknown environments. Our goal in this work is to “robustify” existing real-time motion planners to guarantee safety during navigation of dynamic systems.

In control theory there are techniques like Hamilton-Jacobi Reachability Analysis that provide rigorous safety guarantees of system behavior, along with an optimal controller to reach a given goal (see Fig. 1). However, in general the computational methods used in HJ Reachability Analysis are only tractable for decomposable and/or low-dimensional systems; this is due to the “curse of dimensionality.” That means that for real-time planning we can’t compute safe trajectories for systems of more than about two dimensions. Since most real-world system models like cars, planes, and quadrotors have more than two dimensions, these methods are usually intractable in real time.

On the other hand, geometric motion planners like rapidly-exploring random trees (RRT) and model-predictive control (MPC) can plan in real time by using simplified models of system dynamics and/or a short planning horizon. Although this allows us to perform real-time motion planning, the resulting trajectories may be overly simplified, may lead to unavoidable collisions, and may even be dynamically infeasible (see Fig. 1). For example, imagine riding a bike and following the path on the ground traced by a pedestrian. This path leads you straight towards a tree and then takes a 90 degree turn away at the last second. You can’t make such a sharp turn on your bike, and instead you end up crashing into the tree. Classically, roboticists have mitigated this issue by pretending obstacles are slightly larger than they really are during planning. This greatly improves the chances of not crashing, but it still doesn’t provide guarantees and may lead to unanticipated collisions.

So how do we combine the speed of fast planning with the safety guarantee of slow planning?

fig1
Figure 1. On the left we have a high-dimensional vehicle moving through an obstacle course to a goal. Computing the optimal safe trajectory is a slow and sometimes intractable task, and replanning is nearly impossible. On the right we simplify our model of the vehicle (in this case assuming it can move in straight lines connected at points). This allows us to plan very quickly, but when we execute the planned trajectory we may find that we cannot actually follow the path exactly, and end up crashing.

The Solution: FaSTrack

FaSTrack (Fast and Safe Tracking) is a tool that essentially “robustifies” fast motion planners like RRT or MPC while maintaining real-time performance. FaSTrack allows users to implement a fast motion planner with simplified dynamics while maintaining safety in the form of a precomputed bound on the maximum possible distance between the planner’s state and the actual autonomous system’s state at runtime. We call this distance the tracking error bound. This precomputation also results in an optimal control lookup table which provides the optimal error-feedback controller for the autonomous system to pursue the online planner in real time.

fig2
Figure 2. The idea behind FaSTrack is to plan using the simplified model (blue), but precompute a tracking error bound that captures all potential deviations of the trajectory due to model mismatch and environmental disturbances like wind, and an error-feedback controller to stay within this bound. We can then augment our obstacles by the tracking error bound, which guarantees that our dynamic system (red) remains safe. Augmenting obstacles is not a new idea in the robotics community, but by using our tracking error bound we can take into account system dynamics and disturbances.

Offline Precomputation

We precompute this tracking error bound by viewing the problem as a pursuit-evasion game between a planner and a tracker. The planner uses a simplified model of the true autonomous system that is necessary for real time planning; the tracker uses a more accurate model of the true autonomous system. We assume that the tracker — the true autonomous system — is always pursuing the planner. We want to know what the maximum relative distance (i.e. maximum tracking error) could be in the worst case scenario: when the planner is actively attempting to evade the tracker. If we have an upper limit on this bound then we know the maximum tracking error that can occur at run time.

fig3
Figure 3. Tracking system with complicated model of true system dynamics tracking a planning system that plans with a very simple model.

Because we care about maximum tracking error, we care about maximum relative distance. So to solve this pursuit-evasion game we must first determine the relative dynamics between the two systems by fixing the planner at the origin and determining the dynamics of the tracker relative to the planner. We then specify a cost function as the distance to this origin, i.e. relative distance of tracker to the planner, as seen in Fig. 4. The tracker will try to minimize this cost, and the planner will try to maximize it. While evolving these optimal trajectories over time, we capture the highest cost that occurs over the time period. If the tracker can always eventually catch up to the planner, this cost converges to a fixed cost for all time.
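
The actual offline computation uses Hamilton-Jacobi reachability (solved with the Level Set Toolbox mentioned below), which doesn't fit in a short snippet, but the toy discrete-time recursion below shows the structure of the game on a 1-D example: a planner with bounded speed tries to increase the relative distance, a double-integrator tracker with bounded acceleration tries to decrease it, and the value function keeps the highest cost seen over time. The grid resolution, dynamics, and numbers are arbitrary illustrative choices, not the paper's setup.

    import numpy as np

    dt, a_max, b_max = 0.1, 1.0, 0.5           # time step, tracker accel limit, planner speed limit
    r_grid = np.linspace(-2.0, 2.0, 81)        # relative position (tracker - planner)
    v_grid = np.linspace(-1.5, 1.5, 61)        # tracker velocity
    R, V = np.meshgrid(r_grid, v_grid, indexing="ij")

    def lookup(value, r, v):
        """Nearest-neighbour lookup of the value function on the grid."""
        i = np.clip(np.rint((r - r_grid[0]) / (r_grid[1] - r_grid[0])).astype(int),
                    0, len(r_grid) - 1)
        j = np.clip(np.rint((v - v_grid[0]) / (v_grid[1] - v_grid[0])).astype(int),
                    0, len(v_grid) - 1)
        return value[i, j]

    value = np.abs(R)                          # cost l(r, v) = |r|
    for _ in range(500):                       # fixed-point iteration
        best_over_a = np.full_like(value, np.inf)
        for a in (-a_max, 0.0, a_max):         # tracker minimizes
            worst_over_b = np.full_like(value, -np.inf)
            for b in (-b_max, 0.0, b_max):     # planner (adversary) maximizes
                r_next = np.clip(R + dt * (V - b), r_grid[0], r_grid[-1])
                v_next = np.clip(V + dt * a, v_grid[0], v_grid[-1])
                worst_over_b = np.maximum(worst_over_b, lookup(value, r_next, v_next))
            best_over_a = np.minimum(best_over_a, worst_over_b)
        new_value = np.maximum(np.abs(R), best_over_a)   # keep the highest cost over time
        if np.max(np.abs(new_value - value)) < 1e-4:
            break
        value = new_value

    # The converged value at a matched start (r = 0, v = 0) is a toy tracking error bound.
    print("toy tracking error bound:", lookup(value, 0.0, 0.0))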

The smallest invariant level set of the converged value function determines the tracking error bound, as seen in Fig. 5. Moreover, the gradient of the converged value function can be used to create an optimal error-feedback control policy for the tracker to pursue the planner. We used Ian Mitchell’s Level Set Toolbox and Reachability Analysis to solve this differential game. For a more thorough explanation of the optimization, please see our recent paper from the 2017 IEEE Conference on Decision and Control.

gif4 gif5
Figures 4 & 5: On the left we show the value function initializing at the cost function (distance to origin) and evolving according to the differential game. On the right we show 3D and 2D slices of this value function. Each slice can be thought of as a “candidate tracking error bound.” Over time, some of these bounds become infeasible to stay within. The smallest invariant level set of the converged value function provides us with the tightest tracking error bound that is feasible.

Online Real-Time Planning

In the online phase, we sense obstacles within a given sensing radius and imagine expanding these obstacles by the tracking error bound with a Minkowski sum. Using these padded obstacles, the motion planner decides its next desired state. Based on that relative state between the tracker and planner, the optimal control for the tracker (autonomous system) is determined from the lookup table. The autonomous system executes the optimal control, and the process repeats until the goal has been reached. This means that the motion planner can continue to plan quickly, and by simply augmenting obstacles and using a lookup table for control we can ensure safety!
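
A minimal sketch of that online loop is below, assuming the offline stage has produced a scalar tracking_error_bound and an optimal_control lookup keyed by relative state. The sensor, planner, and dynamics objects are hypothetical placeholders, not the authors' implementation.

    def fastrack_online_step(tracker_state, planner_state, goal, sensor, planner,
                             dynamics, tracking_error_bound, optimal_control, dt):
        # 1. Sense nearby obstacles and pad them by the tracking error bound
        #    (conceptually a Minkowski sum; here just an inflated radius).
        obstacles = sensor.detect(tracker_state)
        padded = [obstacle.inflate(tracking_error_bound) for obstacle in obstacles]

        # 2. The fast planner (e.g. RRT or MPC) picks its next desired state
        #    using the simplified model and the padded obstacles.
        next_planner_state = planner.step(planner_state, goal, padded)

        # 3. Look up the precomputed optimal tracking control for the current
        #    relative state between tracker and planner.
        relative_state = dynamics.relative(tracker_state, next_planner_state)
        control = optimal_control(relative_state)

        # 4. Execute that control on the true system; repeat until the goal is reached.
        tracker_state = dynamics.step(tracker_state, control, dt)
        return tracker_state, next_planner_state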

gif6
Figure 6. MATLAB simulation of a 10D near-hover quadrotor model (blue line) “pursuing” a 3D planning model (green dot) that is using RRT to plan. As new obstacles are discovered (turning red), the RRT plans a new path towards the goal. Based on the relative state between the planner and the autonomous system, the optimal control can be found via look-up table. Even when the RRT planner makes sudden turns, we are guaranteed to stay within the tracking error bound (blue box).

Reducing Conservativeness through Meta-Planning

One consequence of formulating the safe tracking problem as a pursuit-evasion game between the planner and the tracker is that the resulting safe tracking bound is often rather conservative. That is, the tracker can’t guarantee that it will be close to the planner if the planner is always allowed to do the worst possible behavior. One solution is to use multiple planning models, each with its own tracking error bound, simultaneously at planning time. The resulting “meta-plan” is comprised of trajectory segments computed by each planner, each labelled with the appropriate optimal controller to track trajectories generated by that planner. This is illustrated in Fig. 7, where the large blue error bound corresponds to a planner which is allowed to move very quickly and the small red bound corresponds to a planner which moves more slowly.

fig7
Figure 7. By considering two different planners, each with a different tracking error bound, our algorithm is able to find a guaranteed safe “meta-plan” that prefers the less precise but faster-moving blue planner but reverts to the more precise but slower red planner in the vicinity of obstacles. This leads to natural, intuitive behavior that optimally trades off planner conservatism with vehicle maneuvering speed.

Safe Switching

The key to making this work is to ensure that all transitions between planners are safe. This can get a little complicated, but the main idea is that a transition between two planners — call them A and B — is safe if we can guarantee that the invariant set computed for A is contained within that for B. For many pairs of planners this is true, e.g. switching from the blue bound to the red bound in Fig. 7. But often it is not. In general, we need to solve a dynamic game very similar to the original one in FaSTrack, but where we want to know the set of states that we will never leave and from which we can guarantee we end up inside B’s invariant set. Usually, the resulting safe switching bound (SSB) is slightly larger than A’s tracking error bound (TEB), as shown below.

fig8
Figure 8. The safe switching bound for a transition between a planner with a large tracking error bound to one with a small tracking error bound is generally larger than the large tracking error bound, as shown.

Efficient Online Meta-Planning

To do this efficiently in real time, we use a modified version of the classical RRT algorithm. Usually, RRTs work by sampling points in state space and connecting them with line segments to form a tree rooted at the start point. In our case, we replace the line segments with the actual trajectories generated by individual planners. In order to find the shortest route to the goal, we favor planners that can move more quickly, trying them first and only resorting to slower-moving planners if the faster ones fail.

We do have to be careful to ensure safe switching bounds are satisfied, however. This is especially important in cases where the meta-planner decides to transition to a more precise, slower-moving planner, as in the example above. In such cases, we implement a one-step virtual backtracking algorithm in which we make sure the preceding trajectory segment is collision-free using the switching controller.
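
A simplified sketch of how such a meta-planner might prefer fast planners and apply the one-step virtual backtracking check when switching to a more precise one. The planner objects, the ssb table of safe switching bounds, and the collision_free helper are hypothetical placeholders, not the authors' implementation.

    def extend_meta_plan(tree_node, sample, planners, ssb, collision_free):
        """Try planners in fastest-first order; return a safe new segment or None."""
        for planner in planners:
            segment = planner.connect(tree_node.state, sample)
            if segment is None:
                continue
            # The new segment, padded by this planner's tracking error bound,
            # must be collision free.
            if not collision_free(segment, padding=planner.tracking_error_bound):
                continue
            previous = tree_node.planner
            if previous is not None and previous is not planner:
                # One-step virtual backtracking: the preceding segment must also
                # be collision free under the (usually larger) safe switching bound.
                if not collision_free(tree_node.incoming_segment,
                                      padding=ssb[(previous, planner)]):
                    continue
            return segment, planner
        return None, None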

Implementation

We implemented both FaSTrack and Meta-Planning in C++ / ROS, using low-level motion planners from the Open Motion Planning Library (OMPL). Simulated results are shown below, with (right) and without (left) our optimal controller. As you can see, simply using a linear feedback (LQR) controller (left) provides no guarantees about staying inside the tracking error bound.

fig09 fig10
Figures 9 & 10. (Left) A standard LQR controller is unable to keep the quadrotor within the tracking error bound. (Right) The optimal tracking controller keeps the quadrotor within the tracking bound, even during radical changes in the planned trajectory.

It also works on hardware! We tested on the open-source Crazyflie 2.0 quadrotor platform. As you can see in Fig. 12, we manage to stay inside the tracking bound at all times, even when switching planners.

f11 f12
Figures 11 & 12. (Left) A Crazyflie 2.0 quadrotor being observed by an OptiTrack motion capture system. (Right) Position traces from a hardware test of the meta planning algorithm. As shown, the tracking system stays within the tracking error bound at all times, even during the planner switch that occurs approximately 4.5 seconds after the start.

This article was initially published on the BAIR blog, and appears here with the authors’ permission.

This post is based on the following papers:

  • FaSTrack: a Modular Framework for Fast and Guaranteed Safe Motion Planning
    Sylvia Herbert*, Mo Chen*, SooJean Han, Somil Bansal, Jaime F. Fisac, and Claire J. Tomlin
    Paper, Website

  • Planning, Fast and Slow: A Framework for Adaptive Real-Time Safe Trajectory Planning
    David Fridovich-Keil*, Sylvia Herbert*, Jaime F. Fisac*, Sampada Deglurkar, and Claire J. Tomlin
    Paper, Github (code to appear soon)

We would like to thank our coauthors; developing FaSTrack has been a team effort and we are incredibly fortunate to have a fantastic set of colleagues on this project.

Artificial muscles give soft robots superpowers

Origami-inspired artificial muscles are capable of lifting up to 1,000 times their own weight, simply by applying air or water pressure. Credit: Shuguang Li / Wyss Institute at Harvard University

By Lindsay Brownell

Soft robotics has made leaps and bounds over the last decade as researchers around the world have experimented with different materials and designs to allow once rigid, jerky machines to bend and flex in ways that mimic living organisms and let them interact with those organisms more naturally. However, increased flexibility and dexterity come with a trade-off of reduced strength, as softer materials are generally not as strong or resilient as inflexible ones, which limits their use.

Now, researchers at the Wyss Institute at Harvard University and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have created origami-inspired artificial muscles that give much-needed strength to soft robots, allowing them to lift objects up to 1,000 times their own weight using only air or water pressure. The study is published this week in Proceedings of the National Academy of Sciences (PNAS).

“We were very surprised by how strong the actuators [aka, “muscles”] were. We expected they’d have a higher maximum functional weight than ordinary soft robots, but we didn’t expect a thousand-fold increase. It’s like giving these robots superpowers,” says Daniela Rus, Ph.D., the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT and one of the senior authors of the paper.

“Artificial muscle-like actuators are one of the most important grand challenges in all of engineering,” adds Rob Wood, Ph.D., corresponding author of the paper and Founding Core Faculty member of the Wyss Institute, who is also the Charles River Professor of Engineering and Applied Sciences at Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS). “Now that we have created actuators with properties similar to natural muscle, we can imagine building almost any robot for almost any task.”

Each artificial muscle consists of an inner “skeleton” that can be made of various materials, such as a metal coil or a sheet of plastic folded into a certain pattern, surrounded by air or fluid and sealed inside a plastic or textile bag that serves as the “skin.” A vacuum applied to the inside of the bag initiates the muscle’s movement by causing the skin to collapse onto the skeleton, creating tension that drives the motion. Incredibly, no other power source or human input is required to direct the muscle’s movement; it is determined entirely by the shape and composition of the skeleton.

“One of the key aspects of these muscles is that they’re programmable, in the sense that designing how the skeleton folds defines how the whole structure moves. You essentially get that motion for free, without the need for a control system,” says first author Shuguang Li, Ph.D., a Postdoctoral Fellow at the Wyss Institute and MIT CSAIL. This approach allows the muscles to be very compact and simple, and thus more appropriate for mobile or body-mounted systems that cannot accommodate large or heavy machinery.

“When creating robots, one always has to ask, ‘Where is the intelligence – is it in the body, or in the brain?’” says Rus. “Incorporating intelligence into the body (via specific folding patterns, in the case of our actuators) has the potential to simplify the algorithms needed to direct the robot to achieve its goal. All these actuators have the same simple on/off switch, which their bodies then translate into a broad range of motions.”

The team constructed dozens of muscles using materials ranging from metal springs to packing foam to sheets of plastic, and experimented with different skeleton shapes to create muscles that can contract down to 10% of their original size, lift a delicate flower off the ground, and twist into a coil, all simply by sucking the air out of them.

The structural geometry of an artificial muscle’s skeleton determines the muscle’s motion. Credit: Shuguang Li / Wyss Institute at Harvard University

Not only can the artificial muscles move in many ways, they do so with impressive resilience. They can generate about six times more force per unit area than mammalian skeletal muscle can, and are also incredibly lightweight; a 2.6-gram muscle can lift a 3-kilogram object, which is the equivalent of a mallard duck lifting a car. Additionally, a single muscle can be constructed within ten minutes using materials that cost less than $1, making them cheap and easy to test and iterate.

These muscles can be powered by a vacuum, a feature that makes them safer than most of the other artificial muscles currently being tested. “A lot of the applications of soft robots are human-centric, so of course it’s important to think about safety,” says Daniel Vogt, M.S., co-author of the paper and Research Engineer at the Wyss Institute. “Vacuum-based muscles have a lower risk of rupture, failure, and damage, and they don’t expand when they’re operating, so you can integrate them into closer-fitting robots on the human body.”

“In addition to their muscle-like properties, these soft actuators are highly scalable. We have built them at sizes ranging from a few millimeters up to a meter, and their performance holds up across the board,” Wood says. This feature means that the muscles can be used in numerous applications at multiple scales, such as miniature surgical devices, wearable robotic exoskeletons, transformable architecture, deep-sea manipulators for research or construction, and large deployable structures for space exploration.

The team was even able to construct the muscles out of the water-soluble polymer PVA, which opens the possibility of robots that can perform tasks in natural settings with minimal environmental impact, as well as ingestible robots that move to the proper place in the body and then dissolve to release a drug. “The possibilities really are limitless. But the very next thing I would like to build with these muscles is an elephant robot with a trunk that can manipulate the world in ways that are as flexible and powerful as you see in real elephants,” Rus says.

“The actuators developed through this collaboration between the Wood laboratory at Harvard and Rus group at MIT exemplify the Wyss’ approach of taking inspiration from nature without being limited by its conventions, which can result in systems that not only imitate nature, but surpass it,” says the Wyss Institute’s Founding Director Donald Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at HMS and the Vascular Biology Program at Boston Children’s Hospital, as well as Professor of Bioengineering at SEAS.

This research was funded by the Defense Advanced Research Projects Agency (DARPA), the National Science Foundation (NSF), and the Wyss Institute for Biologically Inspired Engineering.
