
FT 300 force torque sensor

Robotiq Updates FT 300 Sensitivity For High Precision Tasks With Universal Robots
Force Torque Sensor feeds data to Universal Robots force mode

Quebec City, Canada, November 13, 2017 – Robotiq launches a 10 times more sensitive version of its FT 300 Force Torque Sensor. With Plug + Play integration on all Universal Robots, the FT 300 performs highly repeatable precision force control tasks such as finishing, product testing, assembly and precise part insertion.

This force torque sensor comes with updated free URCap software that feeds data to the Universal Robots Force Mode. “This new feature allows the user to perform precise force insertion assembly and many finishing applications where force control with high sensitivity is required,” explains Robotiq CTO Jean-Philippe Jobin*.

The URCap also includes a new calibration routine. “We’ve integrated a step-by-step procedure that guides the user through the process, which takes less than 2 minutes,” adds Jobin. “A new dashboard also provides real-time force and moment readings on all 6 axes. Moreover, pre-built programming functions are now embedded in the URCap for intuitive programming.”

See some of the FT 300’s new capabilities in the following demo videos:

#1 How to calibrate with the FT 300 URCap Dashboard

#2 Linear search demo

#3 Path recording demo

Visit the FT 300 webpage or get a quote here

Get the FT 300 specs here

Get more info in the FAQ

Get free Skills to accelerate robot programming of force control tasks.

Get free robot cell deployment resources on leanrobotics.org

* Available with Universal Robots CB3.1 controller only

About Robotiq

Robotiq’s Lean Robotics methodology and products enable manufacturers to deploy productive robot cells across their factory. They leverage the Lean Robotics methodology for faster time to production and increased productivity from their robots. Production engineers standardize on Robotiq’s Plug + Play components for their ease of programming, built-in integration, and adaptability to many processes. They rely on the Flow software suite to accelerate robot projects and optimize robot performance once in production.

Robotiq is the humans behind the robots: an employee-owned business with a passionate team and an international partner network.

Media contact

David Maltais, Communications and Public Relations Coordinator

d.maltais@robotiq.com

1-418-929-2513


Press Release Provided by: Robotiq.Com


Swiss drone industry map

The Swiss Drone Industry Map above is an attempt to list and group all companies and institutions in Switzerland that provide a product or service that makes commercial operations of Unmanned Aerial Vehicles possible. An entity can only appear in one box (i.e. main activity) and must be publicly promoting existing or future products and services. Swiss drone research centres, system integrators and pilot schools are not part of the map. Corrections, suggestions and new submissions are welcome!

I’ve added all the links below so you can easily click through and learn more.

Manufacturers
SwissDrones
senseFly
Flyability
Fotokite
Sunflower Labs
Voliro
AgroFly
Skybotix
Aeroscout
Wingtra
Flying Robots

Flight Systems
PX4-Pro
MotionPilot
WindShape
weControl
Rapyuta Robotics
Daedalean
UAVenture

Governance
FOCA
Drohnen Verband
Zurich
DroneLogbook
UAWaero
SGS
Swiss Re

Traffic Management
Global UTM Association
SITA
Flarm
Skyguide
SkySoft
OneSky

Defense & Security
ViaSat
RUAG
Rheinmetall
UMS Skeldar
Aurora Swiss Aerospace
Kudelski
IDQ

Sensors & Payload
Terabee
uBlox
Distran
FixPosition
SkyAware
Insightness
Sensirion
Sensima Technology

Electric & Solar Propulsion
Evolaris Aviation
AtlantikSolar
MecaPlex
H55
Maxon Motor
Faulhaber

Delivery
Matternet – Swiss Post
Dronistics
Redline – Droneport
Deldro

Energy
TwingTec
SkyPull

Analytics
Picterra
Pix4D
Gamaya
MeteoMatics

Entertainment
Verity Studios
AEROTAIN
Anabatic.Aero
LémanTech

Humanitarian
WeRobotics
FSD
Redog
Swiss Fang

High-Altitude
SolarStratos
OpenStratosphere

Robots solving climate change

The two biggest societal challenges of the twenty-first century are also its biggest opportunities: automation and climate change. These forces of mankind and nature intersect in the alternative energy market. Fossil fuels, with their dark cloud of emissions, are giving way to a rise of solar and wind farms worldwide. Servicing these installations are fleets of robots and drones, offering greater possibilities for expanding CleanTech to the most remote regions of the planet.

As 2017 comes to an end, the solar industry has plateaued for the first time in ten years due to the proposed budget cuts by the Trump administration. Solar has had quite a run, with an average annual growth rate of more than 65% over the past decade, driven largely by federal subsidies. The progressive policy of the Obama administration made the US a leader in alternative energy, resulting in a quarter-million new jobs. While the Federal Government now re-embraces the antiquated allure of fossil fuels, global demand for solar has been rising as infrastructure costs decline by more than half, providing new opportunities without government incentives.

Prior to the renewable energy boom, unattractive roof tiles were the most visible image of solar. While Elon Musk and others are developing more aesthetically pleasing roofing materials, the business model of house-by-house conversion has proven inefficient. Instead, the industry is focusing on “utility-scale” solar farms that will be connected to the national grid. Until recently, such farms have been saddled with ballooning servicing costs.

In a report published last month, leading energy risk management company DNV GL concluded that the alternative energy market could benefit greatly by utilizing Artificial Intelligence (AI) and robotics in designing, developing, deploying and maintaining utility farms. The study, “Making Renewables Smarter: The Benefits, Risks, And Future of Artificial Intelligence In Solar And Wind,” found that the “fields of resource forecasting, control and predictive maintenance” are ripe for tech disruption. Elizabeth Traiger, co-author of the report, explained, “Solar and wind developers, operators, and investors need to consider how their industries can use it, what the impacts are on the industries in a larger sense, and what decisions those industries need to confront.”

Since solar farms are often located in arid, dusty locations, one of the earliest use cases for unmanned systems was self-cleaning robots. As reported in 2014, Israeli company Ecoppia developed a patented waterless panel-washing platform to keep solar up and running in the desert. Today, Ecoppia is cleaning 10 million panels a month. Eran Meller, Chief Executive of Ecoppia, boasts, “We’re pleased to bring the experience gained over four years of cleaning in multiple sites in the Middle East. Cleaning 80 million solar panels in the harshest desert conditions globally, we expect to continue to play a leading role in this growing market.”

Since Ecoppia began selling commercially, there have been other entries into the unmanned maintenance space. This past March, Exosun became the latest to offer autonomous cleaning bots. The track equipment manufacturer claims that robotic systems can cut production losses by 2%, promising a return on investment within 18 months. After their acquisition of Greenbotics in 2013, US-based SunPower also launched its own mechanized cleaning platform, the Oasis, which combines mobile robots and drones.

SunPower brags that its products are ten times faster than traditional (manual) methods while using 75% less water. While SunPower and Exosun leverage their large sales footprint with their existing servicing and equipment networks, Ecoppia is still the product leader. Its proprietary waterless system is the most cost-effective and connected offering on the market. Via a robust cloud network, Ecoppia can sense weather fluctuations and automatically schedule emergency cleanings. Anat Cohen Segev, Director of Marketing, explains, “Within seconds, we would detect a dust storm hitting the site, the master control will automatically suggest an additional cleaning cycle and within a click the entire site will be cleaned.” According to Segev, the robots remove 99% of the dust on the panels.

Drone companies are also entering the maintenance space. Upstart Aerial Power claims to have designed a “SolarBrush” quadcopter that cleans panels. The solar-powered drone is claimed to cut a solar farm’s operational costs by 60%. SolarBrush also promises an 80% savings over existing solutions like Ecoppia, since there are no installation costs. However, Aerial Power has yet to fly its product in the field, as it is still in development. SunPower is selling its own drone survey platform to assess development sites and oversee field operations. Matt Campbell, Vice President of Power Plant Products for SunPower, stated, “A lot of the beginning of the solar industry was focused on the panel. Now we’re looking at innovation all around the rest of the system. That’s why we’re always surveying new technology — whether it’s a robot, whether it’s a drone, whether it’s software — and saying, ‘How can this help us reduce the cost of solar, build projects faster, and make them more reliable?’”

In 2008, the US Department of Energy published an ambitious proposal, “20% Wind Energy by 2030: Increasing Wind Energy’s Contribution to U.S. Electricity Supply.” Now thirteen years from that goal, less than 5% of US energy is derived from wind. Developing wind farms is not novel; however, to achieve 20% by 2030 the US needs to begin looking offshore. To put it in perspective, oceanic wind farms could generate more than 2,000 gigawatts of clean, carbon-free energy, or twice as much electricity as Americans currently consume. To date, there is only one wind farm operating off the coast of the United States. While almost every coastal state has proposals for offshore farms, the industry has been stalled by politics and by the hurdles of servicing turbines in dangerous waters.

For more than a decade the United Kingdom has led the development of offshore wind farms. At the University of Manchester, a leading group of researchers has been exploring a number of AI, robotic and drone technologies for remote inspections. The consortium of academics estimates that these technologies could generate more than $2.5 billion by 2025 in the UK alone. The global offshore market could reach $17 billion by 2020, with 80% of the costs coming from operations and maintenance.

Last month, Innovate UK awarded $1.6 million to Perceptual Robotics and VulcanUAV to incorporate drones and autonomous boats into ocean inspections. These startups follow the business model of successful US inspection upstarts like SkySpecs. Launched three years ago, SkySpecs’ autonomous drones claim to reduce turbine inspections from days to minutes. Danny Ellis, SkySpecs Chief Executive, claims, “Customers that could once inspect only one-third of a wind farm can now do the whole farm in the same amount of time.” Last year, British startup Visual Working accomplished the herculean feat of surpassing 2,000 blade inspections.

In the words of Paolo Brianzoni, Chief Executive of Visual Working: “We are not talking about what we intend to accomplish in the near future – we are actually performing our UAV inspection service every day out there. Many in the industry are spending a considerable amount of time discussing and testing how to use UAV inspections in a safe and efficient way. We have passed that point and have, in the first half of 2016 alone, inspected 250 turbines in the North Sea, averaging more than 10 WTG per day, while still keeping to the highest quality standards.”

This past summer, Europe achieved another clean-energy milestone with the announcement of three new offshore wind farms that, for the first time, require no government subsidies. By bringing down the cost structure, autonomous systems are turning the tide for alternative energy regardless of government investment. Three days before leaving office, President Barack Obama wrote in the journal Science that “Evidence is mounting that any economic strategy that ignores carbon pollution will impose tremendous costs to the global economy and will result in fewer jobs and less economic growth over the long term.” He declared that it is time to move past common misconceptions that climate policy is at odds with business; “rather, it can boost efficiency, productivity, and innovation.”

Vecna Robotics Wins DHL & Dell Robotics Innovation Challenge 2017 with Tote Retrieval System

Vecna Robotics, a leader in intelligent, next-generation, robotic material handling autonomous ground vehicles (AGVs), was awarded first place in the DHL & Dell Robotics Mobile Picking Challenge 2017. The event was held last week at the DHL Innovation Center in Troisdorf, Germany.

Three very different startups vie for “Robohub Choice”

Three very different robotics startups have been battling it out over the last week to win the “Robohub Choice” award in our annual startup competition. One was social, one was medical and one was agricultural! Also, one was from the UK, one was from Ukraine and one was from Canada. Although nine startups entered the voting, it was clear from the start that it was a three-horse race – thanks to our Robohub readers and the social media efforts of the startups.

The most popular startup was UniExo with 70.6% of the vote, followed by BotsAndUs on 14.8% and Northstar Robotics on 13.2%.

These three startups will be able to spend time in the Silicon Valley Robotics Accelerator/Cowork Space in Oakland, and we hope to have a feature about each startup on Robohub over the coming year. The overall winner(s) of the Robot Launch 2017 competition will be announced on December 15. The grand prize is investment of up to $500,000 from The Robotics Hub, while all award winners get access to the Silicon Valley Robotics accelerator program and cowork space.

UniExo | Ukraine

UniExo aims to help people with injuries and movement problems to restore the motor functions of their bodies with modular robotic exoskeleton devices, without additional help of doctors.

Thanks to our device and its advantages, we can help these users in rehabilitation. The product provides free movement for people with disabilities in a form that is comfortable and safe for them, without outside help, as well as for people recovering from surgery or trauma who are undergoing rehabilitation.

We can give people a second chance at a normal life, and motivate them to do things for our world that can help other people.

https://youtu.be/kjHN35zasvE

BotsAndUs | UK (@botsandus)

BotsAndUs believe in humans and robots collaborating towards a better life. Our aim is to create physical and emotional comfort with robots to support wide adoption.

In May ‘17 we launched Bo, a social robot for events, hospitality and retail. Bo approaches you in shops, hotels or hospitals, finds out what you need, takes you to it and gives you tips on the latest bargains.

In a short time the business has grown considerably: global brands as customers (British Telecom, Etisalat, Dixons), a Government award for our Human-Robot-Interaction tech, members of Nvidia’s Inception program and intuAccelerate (bringing Bo to UK’s top 10 malls), >15k Bo interactions.

https://youtu.be/jrLaoKShKT4

Northstar Robotics | Canada (@northstarrobot)

Northstar Robotics is an agricultural technology company that was founded by an experienced farmer and robotics engineer.

Our vision is to create the fully autonomous farm which will address the labour shortage problem and lower farm input costs.  We will make this vision a reality by first providing an open hardware and software platform to allow current farm equipment to become autonomous.  In parallel, we are going to build super awesome robots that will transform farming and set the standard for what modern agricultural equipment should be.

https://youtu.be/o2C4Cx-m2es

 

 

Reading a neural network’s mind

Image: Chelsea Turner/MIT

By Larry Hardesty

Neural networks, which learn to perform computational tasks by analyzing huge sets of training data, have been responsible for the most impressive recent advances in artificial intelligence, including speech-recognition and automatic-translation systems.

During training, however, a neural net continually adjusts its internal settings in ways that even its creators can’t interpret. Much recent work in computer science has focused on clever techniques for determining just how neural nets do what they do.

In several recent papers, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Qatar Computing Research Institute have used a recently developed interpretive technique, which had been applied in other areas, to analyze neural networks trained to do machine translation and speech recognition.

They find empirical support for some common intuitions about how the networks probably work. For example, the systems seem to concentrate on lower-level tasks, such as sound recognition or part-of-speech recognition, before moving on to higher-level tasks, such as transcription or semantic interpretation.

But the researchers also find a surprising omission in the type of data the translation network considers, and they show that correcting that omission improves the network’s performance. The improvement is modest, but it points toward the possibility that analysis of neural networks could help improve the accuracy of artificial intelligence systems.

“In machine translation, historically, there was sort of a pyramid with different layers,” says Jim Glass, a CSAIL senior research scientist who worked on the project with Yonatan Belinkov, an MIT graduate student in electrical engineering and computer science. “At the lowest level there was the word, the surface forms, and the top of the pyramid was some kind of interlingual representation, and you’d have different layers where you were doing syntax, semantics. This was a very abstract notion, but the idea was the higher up you went in the pyramid, the easier it would be to translate to a new language, and then you’d go down again. So part of what Yonatan is doing is trying to figure out what aspects of this notion are being encoded in the network.”

The work on machine translation was presented recently in two papers at the International Joint Conference on Natural Language Processing. On one, Belinkov is first author, and Glass is senior author, and on the other, Belinkov is a co-author. On both, they’re joined by researchers from the Qatar Computing Research Institute (QCRI), including Lluís Màrquez, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and Stephan Vogel. Belinkov and Glass are sole authors on the paper analyzing speech recognition systems, which Belinkov presented at the Conference on Neural Information Processing Systems last week.

Leveling down

Neural nets are so named because they roughly approximate the structure of the human brain. Typically, they’re arranged into layers, and each layer consists of many simple processing units — nodes — each of which is connected to several nodes in the layers above and below. Data are fed into the lowest layer, whose nodes process it and pass it to the next layer. The connections between layers have different “weights,” which determine how much the output of any one node figures into the calculation performed by the next.
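
As a toy illustration of that layered computation (our sketch, not anything from the researchers), each layer below is just a weight matrix: a node's output is a weighted sum of its inputs passed through a simple nonlinearity. The sizes and random weights are arbitrary.

```python
import numpy as np

# Two layers: 16 inputs -> 8 hidden nodes -> 4 outputs.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((16, 8)), rng.standard_normal((8, 4))]

def forward(x, weights):
    for W in weights:
        x = np.maximum(x @ W, 0.0)  # weighted sum, then ReLU nonlinearity
    return x

print(forward(rng.standard_normal(16), weights).shape)  # (4,)
```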

During training, the weights between nodes are constantly readjusted. After the network is trained, its creators can determine the weights of all the connections, but with thousands or even millions of nodes, and even more connections between them, deducing what algorithm those weights encode is nigh impossible.

The MIT and QCRI researchers’ technique consists of taking a trained network and using the output of each of its layers, in response to individual training examples, to train another neural network to perform a particular task. This enables them to determine what task each layer is optimized for.
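
Here is a hedged sketch of that probing idea in PyTorch, under our own assumptions: a frozen network whose per-layer activations have already been collected, and a side task such as part-of-speech tagging. None of these names come from the papers.

```python
import torch
import torch.nn as nn

def probe_layer(activations, labels, num_tags, epochs=100):
    """Fit a linear probe: fixed activations (N, D) -> tag ids (N,)."""
    probe = nn.Linear(activations.shape[1], num_tags)
    opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(probe(activations), labels).backward()
        opt.step()
    # The probe's accuracy estimates how much tag information
    # this layer's representation encodes.
    with torch.no_grad():
        return (probe(activations).argmax(1) == labels).float().mean().item()
```

Running the same probe over every layer and comparing accuracies is what lets one say which layers are "optimized for" which task.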

In the case of the speech recognition network, Belinkov and Glass used individual layers’ outputs to train a system to identify “phones,” distinct phonetic units particular to a spoken language. The “t” sounds in the words “tea,” “tree,” and “but,” for instance, might be classified as separate phones, but a speech recognition system has to transcribe all of them using the letter “t.” And indeed, Belinkov and Glass found that lower levels of the network were better at recognizing phones than higher levels, where, presumably, the distinction is less important.

Similarly, in an earlier paper, presented last summer at the Annual Meeting of the Association for Computational Linguistics, Glass, Belinkov, and their QCRI colleagues showed that the lower levels of a machine-translation network were particularly good at recognizing parts of speech and morphology — features such as tense, number, and conjugation.

Making meaning

But in the new paper, they show that higher levels of the network are better at something called semantic tagging. As Belinkov explains, a part-of-speech tagger will recognize that “herself” is a pronoun, but the meaning of that pronoun — its semantic sense — is very different in the sentences “she bought the book herself” and “she herself bought the book.” A semantic tagger would assign different tags to those two instances of “herself,” just as a machine translation system might find different translations for them in a given target language.

The best-performing machine-translation networks use so-called encoder-decoder models, so the MIT and QCRI researchers’ network uses that design as well. In such systems, the input, in the source language, passes through several layers of the network — known as the encoder — to produce a vector, a string of numbers that somehow represents the semantic content of the input. That vector passes through several more layers of the network — the decoder — to yield a translation in the target language.
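
For concreteness, here is a minimal encoder-decoder sketch in PyTorch matching that description; the GRU layers, dimensions, and vocabulary sizes are our illustrative assumptions, and real translation systems add attention and beam search.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, dim=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        _, h = self.encoder(self.src_emb(src_ids))   # h: the "semantic" vector
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), h)
        return self.out(dec_out)                     # per-token target logits
```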

Although the encoder and decoder are trained together, they can be thought of as separate networks. The researchers discovered that, curiously, the lower layers of the encoder are good at distinguishing morphology, but the higher layers of the decoder are not. So Belinkov and the QCRI researchers retrained the network, scoring its performance according to not only accuracy of translation but also analysis of morphology in the target language. In essence, they forced the decoder to get better at distinguishing morphology.
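
A hedged sketch of that retraining idea, reusing the Seq2Seq toy above and bolting a hypothetical morphology-tagging head onto the decoder states; the tag count and task weighting are our assumptions, not values from the paper.

```python
import torch.nn as nn

model = Seq2Seq(src_vocab=8000, tgt_vocab=8000)
morph_head = nn.Linear(256, 60)   # predicts morph tags from decoder states
loss_fn = nn.CrossEntropyLoss()
alpha = 0.5                       # assumed weight on the morphology objective

def joint_loss(src_ids, tgt_in, tgt_out, morph_tags):
    _, h = model.encoder(model.src_emb(src_ids))
    dec_states, _ = model.decoder(model.tgt_emb(tgt_in), h)
    trans = loss_fn(model.out(dec_states).flatten(0, 1), tgt_out.flatten())
    # Extra supervision forces the decoder states to encode morphology.
    morph = loss_fn(morph_head(dec_states).flatten(0, 1), morph_tags.flatten())
    return trans + alpha * morph
```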

Using this technique, they retrained the network to translate English into German and found that its accuracy increased by 3 percent. That’s not an overwhelming improvement, but it’s an indication that looking under the hood of neural networks could be more than an academic exercise.

#249: ICRA 2017 Company Showcase, with Li Bingbing, Xianbao Chen, Howard Michel and Lester Teh Chee Onn

Image: ICRA 2017

In this episode, Audrow Nash interviews several companies at the International Conference on Robotics and Automation (ICRA). ICRA is the IEEE Robotics and Automation Society’s biggest conference and one of the leading international forums for robotics researchers to present their work.

Interviews:

Li Bingbing, Software Engineer and Cofounder of Transforma in Singapore, on a robot for painting tall buildings.

Xianbao Chen, Associate Researcher at Shanghai Jiao Tong University, on a hexapod robot for large parts machining.

Howard Michel, Chief Technology Officer of UBTech Education, on a humanoid robot for STEM (Science, Technology, Engineering, and Math) education for a range of ages.

Lester Teh Chee Onn, Environmental Engineer at Advisian, on a watercraft for environmental monitoring.

 


Robohub Podcast is on Patreon!

Robohub Podcast has launched a campaign on Patreon!

If you don’t know, Robohub Podcast is a biweekly podcast about robotics. Our goal is to explore global robotics through interviews with experts, both in academia and industry. In our interviews,

  • we discuss technical topics (how things work, design decisions),
  • entrepreneurship (lessons learned, business models, ownership),
  • and anything we find interesting and related to robotics (policy, ethics, global trends, international technology initiatives and education, etc.).

We have published nearly 250 episodes and have spoken with many of the most influential people in robotics, such as Rodney Brooks, Dean Kamen, Radhika Nagpal, and Helen Greiner.

We would like your support so we can bring you interviews from the leading robotics conferences and laboratories around the world. Our first goal is to send two interviewers to ICRA 2018 in Brisbane, Australia.

If you want to support us, visit our Patreon campaign.

 

FaSTrack: Ensuring safe real-time navigation of dynamic systems


By Sylvia Herbert, David Fridovich-Keil, and Claire Tomlin

The Problem: Fast and Safe Motion Planning

Real-time autonomous motion planning and navigation is hard, especially when we care about safety. It becomes even more difficult when we have systems with complicated dynamics, external disturbances (like wind), and a priori unknown environments. Our goal in this work is to “robustify” existing real-time motion planners to guarantee safety during navigation of dynamic systems.

In control theory there are techniques like Hamilton-Jacobi (HJ) Reachability Analysis that provide rigorous safety guarantees of system behavior, along with an optimal controller to reach a given goal (see Fig. 1). However, in general the computational methods used in HJ Reachability Analysis are only tractable for decomposable and/or low-dimensional systems; this is due to the “curse of dimensionality.” That means we can’t compute safe trajectories in real time for systems of more than about two dimensions. Since most real-world system models like cars, planes, and quadrotors have more than two dimensions, these methods are usually intractable in real time.
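
To get a feel for why (a back-of-the-envelope illustration of ours, not a figure from the paper): grid-based HJ solvers represent the value function on a grid with $N$ points per state dimension, so storage and computation grow exponentially with the dimension $d$:

```latex
\text{grid points} \sim N^{d}: \qquad
N = 100,\ d = 2 \;\Rightarrow\; 10^{4} \text{ points}; \qquad
N = 100,\ d = 10 \;\Rightarrow\; 10^{20} \text{ points}.
```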

On the other hand, geometric motion planners like rapidly-exploring random trees (RRT) and model-predictive control (MPC) can plan in real time by using simplified models of system dynamics and/or a short planning horizon. Although this allows us to perform real time motion planning, the resulting trajectories may be overly simplified, lead to unavoidable collisions, and may even be dynamically infeasible (see Fig. 1). For example, imagine riding a bike and following the path on the ground traced by a pedestrian. This path leads you straight towards a tree and then takes a 90 degree turn away at the last second. You can’t make such a sharp turn on your bike, and instead you end up crashing into the tree. Classically, roboticists have mitigated this issue by pretending obstacles are slightly larger than they really are during planning. This greatly improves the chances of not crashing, but still doesn’t provide guarantees and may lead to unanticipated collisions.

So how do we combine the speed of fast planning with the safety guarantee of slow planning?

fig1
Figure 1. On the left we have a high-dimensional vehicle moving through an obstacle course to a goal. Computing the optimal safe trajectory is a slow and sometimes intractable task, and replanning is nearly impossible. On the right we simplify our model of the vehicle (in this case assuming it can move in straight lines connected at points). This allows us to plan very quickly, but when we execute the planned trajectory we may find that we cannot actually follow the path exactly, and end up crashing.

The Solution: FaSTrack

FaSTrack (Fast and Safe Tracking) is a tool that essentially “robustifies” fast motion planners like RRT or MPC while maintaining real-time performance. FaSTrack allows users to implement a fast motion planner with simplified dynamics while maintaining safety in the form of a precomputed bound on the maximum possible distance between the planner’s state and the actual autonomous system’s state at runtime. We call this distance the tracking error bound. The precomputation also produces an optimal control lookup table, which provides the optimal error-feedback controller for the autonomous system to pursue the online planner in real time.

fig2
Figure 2. The idea behind FaSTrack is to plan using the simplified model (blue), but precompute a tracking error bound that captures all potential deviations of the trajectory due to model mismatch and environmental disturbances like wind, and an error-feedback controller to stay within this bound. We can then augment our obstacles by the tracking error bound, which guarantees that our dynamic system (red) remains safe. Augmenting obstacles is not a new idea in the robotics community, but by using our tracking error bound we can take into account system dynamics and disturbances.

Offline Precomputation

We precompute this tracking error bound by viewing the problem as a pursuit-evasion game between a planner and a tracker. The planner uses a simplified model of the true autonomous system that is necessary for real time planning; the tracker uses a more accurate model of the true autonomous system. We assume that the tracker — the true autonomous system — is always pursuing the planner. We want to know what the maximum relative distance (i.e. maximum tracking error) could be in the worst case scenario: when the planner is actively attempting to evade the tracker. If we have an upper limit on this bound then we know the maximum tracking error that can occur at run time.

fig3
Figure 3. Tracking system with complicated model of true system dynamics tracking a planning system that plans with a very simple model.

Because we care about maximum tracking error, we care about maximum relative distance. So to solve this pursuit-evasion game we must first determine the relative dynamics between the two systems by fixing the planner at the origin and determining the dynamics of the tracker relative to the planner. We then specify a cost function as the distance to this origin, i.e. relative distance of tracker to the planner, as seen in Fig. 4. The tracker will try to minimize this cost, and the planner will try to maximize it. While evolving these optimal trajectories over time, we capture the highest cost that occurs over the time period. If the tracker can always eventually catch up to the planner, this cost converges to a fixed cost for all time.
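
In symbols (our paraphrase of the game just described; the notation is assumed rather than taken from the paper), with relative state $r$, tracker control $u_s$, planner control $u_p$, and relative dynamics $g$:

```latex
\dot{r} = g(r, u_s, u_p), \qquad \ell(r) = \operatorname{dist}(r, 0),
\qquad
V(r) = \lim_{T \to \infty}\ \max_{u_p(\cdot)}\ \min_{u_s(\cdot)}\ \max_{t \in [0, T]} \ell\big(r(t)\big).
```

The tracker minimizes, the planner maximizes, and the inner maximum over time captures the highest cost along the trajectory; if the tracker can always eventually catch the planner, this limit exists.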

The smallest invariant level set of the converged value function determines the tracking error bound, as seen in Fig. 5. Moreover, the gradient of the converged value function can be used to create an optimal error-feedback control policy for the tracker to pursue the planner. We used Ian Mitchell’s Level Set Toolbox and Reachability Analysis to solve this differential game. For a more thorough explanation of the optimization, please see our recent paper from the 2017 IEEE Conference on Decision and Control.

gif4 gif5
Figures 4 & 5: On the left we show the value function initializing at the cost function (distance to origin) and evolving according to the differential game. On the right we show 3D and 2D slices of this value function. Each slice can be thought of as a “candidate tracking error bound.” Over time, some of these bounds become infeasible to stay within. The smallest invariant level set of the converged value function provides us with the tightest tracking error bound that is feasible.

Online Real-Time Planning

In the online phase, we sense obstacles within a given sensing radius and imagine expanding these obstacles by the tracking error bound with a Minkowski sum. Using these padded obstacles, the motion planner decides its next desired state. Based on that relative state between the tracker and planner, the optimal control for the tracker (autonomous system) is determined from the lookup table. The autonomous system executes the optimal control, and the process repeats until the goal has been reached. This means that the motion planner can continue to plan quickly, and by simply augmenting obstacles and using a lookup table for control we can ensure safety!

gif6
Figure 6. MATLAB simulation of a 10D near-hover quadrotor model (blue line) “pursuing” a 3D planning model (green dot) that is using RRT to plan. As new obstacles are discovered (turning red), the RRT plans a new path towards the goal. Based on the relative state between the planner and the autonomous system, the optimal control can be found via look-up table. Even when the RRT planner makes sudden turns, we are guaranteed to stay within the tracking error bound (blue box).
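
Below is a self-contained toy of that online loop (our sketch in 2D, not the authors' code): circular obstacles are padded by the tracking error bound, a greedy straight-line planner stands in for RRT, and a saturating proportional pursuit law stands in for the precomputed lookup-table controller.

```python
import numpy as np

TEB = 0.5             # assumed tracking error bound (radius)
STEP = 0.2            # planner step size (simple model)
K, U_MAX = 8.0, 3.0   # stand-in pursuit gain and control limit

def plan_next_state(p, goal, padded_obstacles):
    """Greedy straight-line planner on the simple model."""
    d = goal - p
    cand = p + STEP * d / (np.linalg.norm(d) + 1e-9)
    for c, r in padded_obstacles:
        if np.linalg.norm(cand - c) < r:
            return p  # blocked: stay put (a real planner would replan)
    return cand

def lookup_control(rel_state):
    """Stand-in for the precomputed optimal controller table."""
    return np.clip(-K * rel_state, -U_MAX, U_MAX)

tracker, planner = np.zeros(2), np.zeros(2)
goal = np.array([5.0, 0.0])
obstacles = [(np.array([2.5, 2.0]), 0.4)]
padded = [(c, r + TEB) for c, r in obstacles]  # Minkowski sum with TEB disk

for _ in range(100):
    planner = plan_next_state(planner, goal, padded)
    tracker = tracker + 0.1 * lookup_control(tracker - planner)

print(np.linalg.norm(tracker - planner) <= TEB)  # True: stays in the bound
```

In FaSTrack proper, the padded obstacles feed a real planner such as RRT, and the pursuit law is replaced by the error-feedback controller read off the gradient of the precomputed value function.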

Reducing Conservativeness through Meta-Planning

One consequence of formulating the safe tracking problem as a pursuit-evasion game between the planner and the tracker is that the resulting safe tracking bound is often rather conservative. That is, the tracker can’t guarantee that it will be close to the planner if the planner is always allowed to do the worst possible behavior. One solution is to use multiple planning models, each with its own tracking error bound, simultaneously at planning time. The resulting “meta-plan” is comprised of trajectory segments computed by each planner, each labelled with the appropriate optimal controller to track trajectories generated by that planner. This is illustrated in Fig. 7, where the large blue error bound corresponds to a planner which is allowed to move very quickly and the small red bound corresponds to a planner which moves more slowly.

fig7
Figure 7. By considering two different planners, each with a different tracking error bound, our algorithm is able to find a guaranteed safe “meta-plan” that prefers the less precise but faster-moving blue planner but reverts to the more precise but slower red planner in the vicinity of obstacles. This leads to natural, intuitive behavior that optimally trades off planner conservatism with vehicle maneuvering speed.

Safe Switching

The key to making this work is to ensure that all transitions between planners are safe. This can get a little complicated, but the main idea is that a transition between two planners — call them A and B — is safe if we can guarantee that the invariant set computed for A is contained within that for B. For many pairs of planners this is true, e.g. switching from the blue bound to the red bound in Fig. 7. But often it is not. In general, we need to solve a dynamic game very similar to the original one in FaSTrack, but where we want to know the set of states that we will never leave and from which we can guarantee we end up inside B’s invariant set. Usually, the resulting safe switching bound (SSB) is slightly larger than A’s tracking error bound (TEB), as shown below.

fig8
Figure 8. The safe switching bound for a transition between a planner with a large tracking error bound to one with a small tracking error bound is generally larger than the large tracking error bound, as shown.
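
As a small illustration of that containment condition (our sketch; the value grids and level values are assumed to come from the offline computations), the check below verifies on a shared grid that every relative state inside planner A's invariant set also lies inside planner B's:

```python
import numpy as np

def switch_is_safe(V_A, V_B, level_A, level_B):
    """True if A's invariant level set is contained in B's.

    V_A, V_B: converged value functions sampled on the same grid.
    level_A, level_B: the levels defining each planner's invariant set.
    """
    inside_A = V_A <= level_A
    inside_B = V_B <= level_B
    return bool(np.all(inside_B[inside_A]))  # every A-state is also a B-state
```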

Efficient Online Meta-Planning

To do this efficiently in real time, we use a modified version of the classical RRT algorithm. Usually, RRTs work by sampling points in state space and connecting them with line segments to form a tree rooted at the start point. In our case, we replace the line segments with the actual trajectories generated by individual planners. In order to find the shortest route to the goal, we favor planners that can move more quickly, trying them first and only resorting to slower-moving planners if the faster ones fail.

We do have to be careful to ensure safe switching bounds are satisfied, however. This is especially important in cases where the meta-planner decides to transition to a more precise, slower-moving planner, as in the example above. In such cases, we implement a one-step virtual backtracking algorithm in which we make sure the preceding trajectory segment is collision-free using the switching controller.

Implementation

We implemented both FaSTrack and Meta-Planning in C++ / ROS, using low-level motion planners from the Open Motion Planning Library (OMPL). Simulated results are shown below, with (right) and without (left) our optimal controller. As you can see, simply using a linear feedback (LQR) controller (left) provides no guarantees about staying inside the tracking error bound.

fig09 fig10
Figures 9 & 10. (Left) A standard LQR controller is unable to keep the quadrotor within the tracking error bound. (Right) The optimal tracking controller keeps the quadrotor within the tracking bound, even during radical changes in the planned trajectory.

It also works on hardware! We tested on the open-source Crazyflie 2.0 quadrotor platform. As you can see in Fig. 12, we manage to stay inside the tracking bound at all times, even when switching planners.

f11 f12
Figures 11 & 12. (Left) A Crazyflie 2.0 quadrotor being observed by an OptiTrack motion capture system. (Right) Position traces from a hardware test of the meta planning algorithm. As shown, the tracking system stays within the tracking error bound at all times, even during the planner switch that occurs approximately 4.5 seconds after the start.

This article was initially published on the BAIR blog, and appears here with the authors’ permission.

This post is based on the following papers:

  • FaSTrack: a Modular Framework for Fast and Guaranteed Safe Motion Planning
    Sylvia Herbert*, Mo Chen*, SooJean Han, Somil Bansal, Jaime F. Fisac, and Claire J. Tomlin
    Paper, Website

  • Planning, Fast and Slow: A Framework for Adaptive Real-Time Safe Trajectory Planning
    David Fridovich-Keil*, Sylvia Herbert*, Jaime F. Fisac*, Sampada Deglurkar, and Claire J. Tomlin
    Paper, Github (code to appear soon)

We would like to thank our coauthors; developing FaSTrack has been a team effort and we are incredibly fortunate to have a fantastic set of colleagues on this project.

Artificial muscles give soft robots superpowers

Origami-inspired artificial muscles are capable of lifting up to 1,000 times their own weight, simply by applying air or water pressure. Credit: Shuguang Li / Wyss Institute at Harvard University

By Lindsay Brownell

Soft robotics has made leaps and bounds over the last decade as researchers around the world have experimented with different materials and designs to allow once rigid, jerky machines to bend and flex in ways that mimic living organisms and let machines interact with them more naturally. However, increased flexibility and dexterity come at the cost of reduced strength, as softer materials are generally not as strong or resilient as inflexible ones, which limits their use.

Now, researchers at the Wyss Institute at Harvard University and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have created origami-inspired artificial muscles that give soft robots much-needed strength, allowing them to lift objects that are up to 1,000 times their own weight using only air or water pressure. The study is published this week in Proceedings of the National Academy of Sciences (PNAS).

“We were very surprised by how strong the actuators [aka, “muscles”] were. We expected they’d have a higher maximum functional weight than ordinary soft robots, but we didn’t expect a thousand-fold increase. It’s like giving these robots superpowers,” says Daniela Rus, Ph.D., the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT and one of the senior authors of the paper.

“Artificial muscle-like actuators are one of the most important grand challenges in all of engineering,” adds Rob Wood, Ph.D., corresponding author of the paper and Founding Core Faculty member of the Wyss Institute, who is also the Charles River Professor of Engineering and Applied Sciences at Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS). “Now that we have created actuators with properties similar to natural muscle, we can imagine building almost any robot for almost any task.”

Each artificial muscle consists of an inner “skeleton” that can be made of various materials, such as a metal coil or a sheet of plastic folded into a certain pattern, surrounded by air or fluid and sealed inside a plastic or textile bag that serves as the “skin.” A vacuum applied to the inside of the bag initiates the muscle’s movement by causing the skin to collapse onto the skeleton, creating tension that drives the motion. Incredibly, no other power source or human input is required to direct the muscle’s movement; it is determined entirely by the shape and composition of the skeleton.

“One of the key aspects of these muscles is that they’re programmable, in the sense that designing how the skeleton folds defines how the whole structure moves. You essentially get that motion for free, without the need for a control system,” says first author Shuguang Li, Ph.D., a Postdoctoral Fellow at the Wyss Institute and MIT CSAIL. This approach allows the muscles to be very compact and simple, and thus more appropriate for mobile or body-mounted systems that cannot accommodate large or heavy machinery.


“When creating robots, one always has to ask, ‘Where is the intelligence – is it in the body, or in the brain?’” says Rus. “Incorporating intelligence into the body (via specific folding patterns, in the case of our actuators) has the potential to simplify the algorithms needed to direct the robot to achieve its goal. All these actuators have the same simple on/off switch, which their bodies then translate into a broad range of motions.”

The team constructed dozens of muscles using materials ranging from metal springs to packing foam to sheets of plastic, and experimented with different skeleton shapes to create muscles that can contract down to 10% of their original size, lift a delicate flower off the ground, and twist into a coil, all simply by sucking the air out of them.

The structural geometry of an artificial muscle’s skeleton determines the muscle’s motion. Credit: Shuguang Li / Wyss Institute at Harvard University

Not only can the artificial muscles move in many ways, they do so with impressive resilience. They can generate about six times more force per unit area than mammalian skeletal muscle can, and are also incredibly lightweight; a 2.6-gram muscle can lift a 3-kilogram object, which is the equivalent of a mallard duck lifting a car. Additionally, a single muscle can be constructed within ten minutes using materials that cost less than $1, making them cheap and easy to test and iterate.
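
The thousand-fold headline figure squares with that example; checking the arithmetic ourselves:

```latex
\frac{3\ \mathrm{kg}}{2.6\ \mathrm{g}} = \frac{3000\ \mathrm{g}}{2.6\ \mathrm{g}} \approx 1150,
```

i.e., roughly a thousand times the muscle's own weight.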

These muscles can be powered by a vacuum, a feature that makes them safer than most of the other artificial muscles currently being tested. “A lot of the applications of soft robots are human-centric, so of course it’s important to think about safety,” says Daniel Vogt, M.S., co-author of the paper and Research Engineer at the Wyss Institute. “Vacuum-based muscles have a lower risk of rupture, failure, and damage, and they don’t expand when they’re operating, so you can integrate them into closer-fitting robots on the human body.”

“In addition to their muscle-like properties, these soft actuators are highly scalable. We have built them at sizes ranging from a few millimeters up to a meter, and their performance holds up across the board,” Wood says. This feature means that the muscles can be used in numerous applications at multiple scales, such as miniature surgical devices, wearable robotic exoskeletons, transformable architecture, deep-sea manipulators for research or construction, and large deployable structures for space exploration.

The team was even able to construct the muscles out of the water-soluble polymer PVA, which opens the possibility of robots that can perform tasks in natural settings with minimal environmental impact, as well as ingestible robots that move to the proper place in the body and then dissolve to release a drug. “The possibilities really are limitless. But the very next thing I would like to build with these muscles is an elephant robot with a trunk that can manipulate the world in ways that are as flexible and powerful as you see in real elephants,” Rus says.

“The actuators developed through this collaboration between the Wood laboratory at Harvard and Rus group at MIT exemplify the Wyss’ approach of taking inspiration from nature without being limited by its conventions, which can result in systems that not only imitate nature, but surpass it,” says the Wyss Institute’s Founding Director Donald Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at HMS and the Vascular Biology Program at Boston Children’s Hospital, as well as Professor of Bioengineering at SEAS.

This research was funded by the Defense Advanced Research Projects Agency (DARPA), the National Science Foundation (NSF), and the Wyss Institute for Biologically Inspired Engineering.

The stories we tell about technology: AI Narratives

By Susannah Odell and Natasha McCarthy

Technology narratives

The nature, promise and risks of new technologies enter into our shared thinking through narrative – explicit or implicit stories about the technologies and their place in our lives. These narratives can determine what is salient about the technologies, influencing how they are represented in media, culture and everyday discussion. The narratives can influence the dynamics of concern and aspiration across society; the ways and the contexts in which different groups and individuals become aware of and respond to mainstream, new and emerging technologies. The narratives available at a particular point in time, and who tells them, can affect the course of technology development and uptake in subtle ways.

Whilst stories about artificial intelligence have been around for centuries, the way we think about AI is evolving. The Royal Society and Leverhulme Centre for the Future of Intelligence are exploring the ways that narratives about AI might be influencing the development of the technology.

The longevity and influence of narratives

Exploring different technology areas can show how explicit framings of a technology – how it is presented to the wider world – can be influential and long-lasting. For example, in nuclear energy, Lewis Strauss, the Chairman of the US Atomic Energy Commission in 1954, stated that nuclear power would create energy “too cheap to meter.” What turned out to be over-promising for this technology shaped the arguments of sceptics, and this image continues to be used by those who critique the technology. Early scientific optimism can create inadvertent and unexpected milestones that may – rightly or wrongly – influence how a technology is perceived when those milestones are not met. Such framings can be hard to shake off and can dominate more complex and subtle considerations.

Diverse visions: aspirations and concerns

Instead of promoting single framings for technologies and their applications, sowing the seeds early on for multiple voices to be heard can promote diverse narratives and ensure that the technology develops in line with genuine societal needs. A greater diversity of both actors in the development of the technology and diversity in the stories we tell about AI may elucidate new uses and governance needs. This requires extensive and continued public dialogue; the Royal Society’s public dialogue on machine learning recently explored how these views can be context specific.

Encouraging credible, trustworthy and independent communicators who do not stand to benefit personally from the technology can create more realistic narratives around new technologies and science, especially when combined with greater scientific transparency and self-correction. Comprehensive scenario planning can build trustworthy narratives, helping to analyse possible worst case accident scenarios and substantially reduce future risk.

Widening the narratives on AI

Diversifying today’s stories about AI to ensure that they reflect the current state of technological development will give us better ideas of how AI can be used to transform our lives. Dominant narratives focus on anthropomorphised AI, but the reality of AI includes systems that are distributed, embedded in complex systems and found in varied applications, such as helping doctors detect breast cancer or increasing the responsiveness of emergency services to flooding incidents. Adding to existing narratives with stories from underrepresented voices can also help us, as citizens, policy-makers and scientists, to imagine new opportunities and expand our assessment of how AI should be regarded, regulated and harnessed for the best possible economic and societal outcomes.

This is why the Leverhulme Centre for the Future of Intelligence and the Royal Society are exploring how visions and narratives are shaping perceptions, the development of intelligent technology and trust in its use. Find more information.

So where are the jobs?

Dan Burstein, reporter, novelist and successful venture capitalist, declared Wednesday night at RobotLab‘s winter forum on Autonomous Transportation & SmartCities that within one hundred years the majority of jobs in the USA (and the world) could disappear, transferring the mantle of work from humans to machines. Burstein cautioned the audience that unless governments address the threat of millions of unemployable humans with a wider safety net, democracy could fail. The wisdom of one of the world’s most successful venture investors did not fall on deaf ears.

In their book, Only Humans Need Apply, Thomas Davenport and Julia Kirby also warn that humans are too easily ceding their future to machines. “Many knowledge workers are fearful. We should be concerned, given the potential for these unprecedented tools to make us redundant. But we should not feel helpless in the midst of the large-scale change unfolding around us,” state Davenport and Kirby. The premise of their book is not to deny the disruption by automation, but to empower its readers with the knowledge of where jobs are going to be created in the new economy. The authors suggest that robots should be looked at as augmenting humans, not replacing them. “Look for ways to help humans perform their most human and most valuable work better,” say Davenport and Kirby. The book suggests that in order for human society to survive long-term, a new social contract has to be drawn up between employer and employee. The authors optimistically predict that corporations’ efforts to keep human workers employable will become part of their “social license to operate.”


In every industrial movement since the invention of the power loom and cotton gin there have been great transfers of wealth and jobs. History is riddled with the fear of the unknown that is countered by the unwavering human spirit of invention. As societies evolve, pressured by the adoption of technology, it will be the progressive thinkers embracing change who will lead movements of people to new opportunities.

Burstein’s remarks were very timely, as this past week McKinsey & Company released a new report entitled Jobs lost, jobs gained: Workforce transitions in a time of automation. McKinsey’s analysis took a global view of 46 countries that comprise 90% of the world’s Gross Domestic Product, and then focused in particular on six industrial countries with varying income levels: China, Germany, India, Japan, Mexico, and the United States. The introduction explains, “For each, we modeled the potential net employment changes for more than 800 occupations, based on different scenarios for the pace of automation adoption and for future labor demand.” The analysis ultimately concludes where the new jobs will be in 2030.

A prominent factor in the McKinsey report is that over the next thirteen years global consumption is anticipated to grow by $23 trillion. Based upon this trajectory, the authors estimate that between 300 and 450 million new jobs will be generated worldwide, especially in the Far East. In addition, dramatic shifts in demographics and the environment will lead to the expansion of jobs in healthcare, IT consulting, construction, infrastructure, clean energy, and domestic services. The report estimates that automation will displace between 400 and 800 million people by 2030, forcing 75 to 375 million working men and women to switch professions and learn new skills. In order to survive, populations will have to embrace the concept of lifelong learning. Based upon the math, more than half of the displaced will be unemployable in their current professions.


According to McKinsey a bright spot for employment could be the 80 to 200 million new jobs created by modernizing aging municipalities into “Smart Cities.” McKinsey’s conclusions were echoed by Rhonda Binder and Mitchell Kominsky of Venture Smarter in their RobotLab presentation last week. Binder presented her experiences with turning around the Jamaica Business Improvement District of Queens, New York by implementing her “Three T’s Strategy – Tourism, Transportation and Technology.” Binder stated that cities offer the perfect laboratory for autonomous systems and sensor-based technologies to improve the quality of life of residents. To support these endeavors a hiring surge of urban technologists, architects, civil engineers, and construction workers across all trades could ensue in the next decade.

This is further validated by Google subsidiary Sidewalk Labs’ recent partnership with Toronto, Canada to redevelop, and digitize, 800 acres of the city’s waterfront property. Dan Doctoroff, Chief Executive Officer of Sidewalk Labs, explained that the goal of the partnership is to prove what is possible by building a digital city within an existing city to test out autonomous transportation, new communication access, healthcare delivery solutions and a variety of urban planning technologies. Doctoroff’s sentiment was endorsed by Canadian Prime Minister Justin Trudeau, who said that “Sidewalk Toronto will transform Quayside [the waterfront] into a thriving hub for innovation and a community for tens of thousands of people to live, work, and play.” The access to technology not only offers the ability to improve the quality of living within the city, but also fosters an influx of sustainable jobs for decades.

In addition to updating crumbling infrastructure, aging populations will drive an increase in global healthcare services, particularly demand for in-home caregivers and aid workers. According to McKinsey, there will be over 300 million people globally over 65 years old by 2030, leading to 50 to 80 million new jobs. Geriatric medicine is leading new research in artificial intelligence and robotics for aging-in-place populations, demanding more doctors, nurses, medical technicians, and personal aides.


Aging will not be the only challenge facing the planet. Global warming could lead to an explosion of jobs aimed at turning back the clock on climate change. The rush to develop advances in renewable energy worldwide is already generating billions of dollars of new investment and demand for high-skill and manual labor. As an example, the American solar industry employed a record-high 260,077 workers in late 2016, growth of at least 20% over the past four years. New York State alone saw a 7% uptick in 2016, to close to 150,000 clean-energy jobs. McKinsey estimates that by 2030, tens of millions of new professions could be created in developing, manufacturing and installing energy-efficient innovations.

McKinsey also estimates that automation itself will bring new employment, with corporate-technology spending hitting record highs. While the number of jobs added to support the deployment of machines is smaller than in the other industries above, these occupations offer higher wages. Robots could potentially create 20 to 50 million new “grey collar” professions globally. In addition, re-training workers for these professions could lead to a new workforce of 100 million educators.


The report does not shy away from the fact that a major disruption is on the horizon for labor. In fact, the authors hope that by researching pockets of positive growth, humans will not be helpless victims. As Devin Fidler of the Institute for the Future suggests, “As basic automation and machine learning move toward becoming commodities, uniquely human skills will become more valuable. There will be an increasing economic incentive to develop mass training that better unlocks this value.”

A hundred years ago the world experienced a dramatic shift from an agrarian lifestyle to manufacturing. Since then, there have been revolutions in mass transportation and communications. No one could have predicted the massive transfer of jobs from the fields to the urban factories at the beginning of the twentieth century. Likewise, it is difficult to know what the next hundred years have in store for human, and robot, kind.
