
June 2017 fundings, acquisitions, and IPOs

June 2017 saw two robotics-related companies raise $50 million each, while 17 others raised a combined $248 million, for a monthly total of $348 million. Acquisitions also continued to be substantial, with SoftBank's acquisition of Google's robotic properties Boston Dynamics and Schaft, plus two other acquisitions.

Fundings

  • Drive.ai raised $50 million in a Series B funding round, led by New Enterprise Associates, Inc. (NEA) with participation from GGV Capital and existing investors including Northern Light Venture Capital. Andrew Ng who led AI projects at Baidu and Google (and is husband to Drive.ai’s co-founder and president Carol Reiley) joined the board of directors and said: 

    “The cutting-edge of autonomous driving has shifted squarely to deep learning. Even traditional autonomous driving teams have 'sprinkled on' some deep learning, but Drive.ai is at the forefront of leveraging deep learning to build a truly modern autonomous driving software stack.”

  • Aera Technology, renamed from FusionOps, a Silicon Valley software and AI provider, raised $50 million from New Enterprise Associates. Aera seems to be the first RPA (robotic process automation) offering to actuate in the physical world. Merck uses Aera to predict demand, determine product routing and interact with warehouse management systems to enact what’s needed.

    “The leap from transactional automation to cognitive automation is imminent and it will forever transform the way we work,” says Frederic Laluyaux, President and CEO of Aera. “At Aera, we deliver the technology that enables the Self-Driving Enterprise: a cognitive operating system that connects you with your business and autonomously orchestrates your operations.”

  • Swift Navigation, the San Francisco tech firm building centimeter-accurate GPS technology to power a world of autonomous vehicles, raised $34 million in a Series B financing round led by New Enterprise Associates (NEA), with participation from existing investors Eclipse and First Round Capital. Swift provides solutions to over 2,000 customers, including autonomous vehicles, precision agriculture, unmanned aerial vehicles (UAVs), robotics, maritime, transportation/logistics and outdoor industrial applications. By moving GPS positioning from custom hardware to a flexible software-based receiver, Swift Navigation delivers Real Time Kinematics (RTK) GPS (100 times more accurate than traditional GPS) at a fraction of the cost ($2k) of alternative RTK systems.
  • AeroFarms raised over $34 million of a $40 million Series D. The New Jersey-based indoor vertical farming startup now has 9 operating indoor farms. AeroFarms grows leafy greens using aeroponics – growing them in a misting environment without soil, using LED lights and growth algorithms. The round brings AeroFarms’ total fundraising to over $130 million since 2014, including a $40 million note from Goldman Sachs and Prudential.
  • Seven Dreamers Labs, a Tokyo startup commercializing the Laundroid robot, raised $22.8 million from KKR’s co-founders Henry Kravis and George Roberts, Chinese conglomerate Fosun International, and others. Laundroid is being developed with Panasonic and Daiwa House.
  • Bowery Farming, which raised $7.5 million earlier this year, raised an additional $20 million from General Catalyst, GGV Capital and GV (formerly Google Ventures). Bowery’s first indoor farm in Kearny, NJ, uses proprietary computer software, LED lighting and robotics to grow leafy greens without pesticides and with 95% less water than traditional agriculture.
  • Drone Racing League raised $20 million in a Series B investment round led by Sky, Liberty Media and Lux Capital, and new investors Allianz and World Wrestling Entertainment, plus existing investors Hearst Ventures, RSE Ventures, Lerer Hippeau Ventures, and Courtside Ventures.
  • Momentum Machines, the SF-based startup developing a hamburger-making robot, raised $18.4 million in an equity offering of $21.8 million, from existing investors Lemnos Labs, GV, K5 Ventures and Khosla Ventures. The company has been working on its first retail location since at least June of last year. There is still no scheduled opening date for the flagship, though it's expected to be located in San Francisco's South of Market neighborhood.
  • AEye, a startup developing a solid-state LiDAR and other vision systems for self-driving cars, raised $16 million in a Series A round led by Kleiner Perkins Caufield & Byers, Airbus Ventures, Intel Capital, Tyche Partners and others.

    Said Luis Dussan, CEO of AEye: “The biggest bottleneck to the rollout of robotic vision solutions has been the industry’s inability to deliver a world-class perception layer. Quick, accurate, intelligent interpretation of the environment that leverages and extends the human experience is the Holy Grail, and that’s exactly what AEye intends to deliver.”

  • Snips, an NYC voice recognition AI startup, raised $13 million in a Series A round led by MAIF Avenir with PSIM Fund managed by Bpifrance, as well as previous investor Eniac Ventures and K-Fund 1 and Korelya Capital joining the round. Snips makes an on-device system that, the company claims, parses and understands speech better than Amazon's Alexa.
  • Misty Robotics, a spin-out from Orbotix/Sphero, raised $11.5 million in Series A funding from Venrock, Foundry Group and others. Ian Bernstein, former Sphero co-founder and CTO, will be taking the role of Head of Product and is joined by five other autonomous robotics division team members. Misty Robotics will use its new capital to build out the team and accelerate product development. Sphero and Misty Robotics will have a close partnership and have signed co-marketing and co-development agreements.
  • Superflex, a spin-off from SRI, has raised $10.2 million in equity financing from 10 unnamed investors. Superflex is developing a powered suit designed for individuals experiencing mobility difficulties and working in challenging environments to support the wearer’s torso, hips and legs.
  • Nongtian Guanjia (FarmFriend), a Chinese drone/ag industry software startup, raised $7.36 million led by Gobi Partners and existing investors GGV Capital, Shunwei Capital, the Zhen Fund and Yunqi Partners.
  • Carmera, a NYC-based auto tech startup, unstealthed this week with $6.4M in funding led by Matrix Partners. The two-year-old company has been quietly collecting data for its 3D mapping solution, partnering with delivery fleets to install its sensor and data collection platform.
  • Cognata, an Israeli deep learning simulation startup, raised $5 million from Emerge, Maniv Mobility, and Airbus Ventures. Cognata recently launched a self-driving vehicle road-testing simulation package.

    “Every autonomous vehicle developer faces the same challenge—it is really hard to generate the numerous edge cases and the wide variety of real-world environments. Our simulation platform rapidly pumps out large volumes of rich training data to fuel these algorithms,” said Cognata’s Danny Atsmon.

  • SoftWear Automation, the GA Tech and DARPA sponsored startup developing sewing robots for apparel manufacturing, raised $4.5 million in a Series A round from CTW Venture Partners.
  • Knightscope, a startup developing robotic security technologies, raised $3 million from Konica Minolta. The capital is to be invested in Knightscope’s current Reg A+ “mini-IPO” offering of Series M Preferred Stock.
  • Multi Tower Co, a Danish medical device startup, raised around $1.12 million through a network of private and public investors most notable of which were Syddansk Innovation, Rikkesege Invest, M. Blæsbjerg Holding and Dahl Gruppen Holding. The Multi Tower Robot used to lift and move hospital patients, is developed through Blue Ocean Robotics’ partnership program, RoBi-X, in a public-private partnership (PPP) between University Hospital Køge, Multi Tower Company and Blue Ocean Robotics.
  • Optimus Ride, an MIT spinoff developing self-driving tech, raised $1.1 million in financing from an undisclosed investor.

Acquisitions

  • SoftBank acquired Boston Dynamics and Schaft from Google’s parent company Alphabet for an undisclosed amount.
    • Boston Dynamics, a DARPA- and DoD-funded, 25-year-old company, designs two- and four-legged robots for the military. Videos of BD’s robots WildCat, Big Dog, Cheetah and, most recently, Handle continue to be YouTube hits. Handle is a two-wheeled, four-legged hybrid robot that can stand, walk, run and roll at up to 9 MPH.
    • Schaft, a Japanese participant in the DARPA Robotics Challenge, recently unveiled an updated version of its two-legged robot that can climb stairs, carry a 125-pound payload, move in tight spaces and keep its balance throughout.
  • IPG Photonics, a laser component manufacturer/integrator of welding and laser-cutting systems, including robotic ones, acquired Innovative Laser Technologies, a Minnesota laser systems maker, for $40 million. 
  • Motivo Engineering, an engineering product developer, has acquired Robodondo, an ag tech integrator focused on food processing, for an undisclosed amount.

IPOs

  • None. Nada. Zip.

Peering into neural networks

Neural networks learn to perform computational tasks by analyzing large sets of training data. But once they’ve been trained, even their designers rarely have any idea what data elements they’re processing.
Image: Christine Daniloff/MIT

By Larry Hardesty

Neural networks, which learn to perform computational tasks by analyzing large sets of training data, are responsible for today’s best-performing artificial intelligence systems, from speech recognition systems, to automatic translators, to self-driving cars.

But neural nets are black boxes. Once they’ve been trained, even their designers rarely have any idea what they’re doing — what data elements they’re processing and how.

Two years ago, a team of computer-vision researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) described a method for peering into the black box of a neural net trained to identify visual scenes. The method provided some interesting insights, but it required data to be sent to human reviewers recruited through Amazon’s Mechanical Turk crowdsourcing service.

At this year’s Computer Vision and Pattern Recognition conference, CSAIL researchers will present a fully automated version of the same system. Where the previous paper reported the analysis of one type of neural network trained to perform one task, the new paper reports the analysis of four types of neural networks trained to perform more than 20 tasks, including recognizing scenes and objects, colorizing grey images, and solving puzzles. Some of the new networks are so large that analyzing any one of them would have been cost-prohibitive under the old method.

The researchers also conducted several sets of experiments on their networks that not only shed light on the nature of several computer-vision and computational-photography algorithms, but could also provide some evidence about the organization of the human brain.

Neural networks are so called because they loosely resemble the human nervous system, with large numbers of fairly simple but densely connected information-processing “nodes.” Like neurons, a neural net’s nodes receive information signals from their neighbors and then either “fire” — emitting their own signals — or don’t. And as with neurons, the strength of a node’s firing response can vary.

In both the new paper and the earlier one, the MIT researchers doctored neural networks trained to perform computer vision tasks so that they disclosed the strength with which individual nodes fired in response to different input images. Then they selected the 10 input images that provoked the strongest response from each node.

In the earlier paper, the researchers sent the images to workers recruited through Mechanical Turk, who were asked to identify what the images had in common. In the new paper, they use a computer system instead.

“We catalogued 1,100 visual concepts — things like the color green, or a swirly texture, or wood material, or a human face, or a bicycle wheel, or a snowy mountaintop,” says David Bau, an MIT graduate student in electrical engineering and computer science and one of the paper’s two first authors. “We drew on several data sets that other people had developed, and merged them into a broadly and densely labeled data set of visual concepts. It’s got many, many labels, and for each label we know which pixels in which image correspond to that label.”

The paper’s other authors are Bolei Zhou, co-first author and fellow graduate student; Antonio Torralba, MIT professor of electrical engineering and computer science; Aude Oliva, CSAIL principal research scientist; and Aditya Khosla, who earned his PhD as a member of Torralba’s group and is now the chief technology officer of the medical-computing company PathAI.

The researchers also knew which pixels of which images corresponded to a given network node’s strongest responses. Today’s neural nets are organized into layers. Data are fed into the lowest layer, which processes them and passes them to the next layer, and so on. With visual data, the input images are broken into small chunks, and each chunk is fed to a separate input node.

For every strong response from a high-level node in one of their networks, the researchers could trace back the firing patterns that led to it, and thus identify the specific image pixels it was responding to. Because their system could frequently identify labels that corresponded to the precise pixel clusters that provoked a strong response from a given node, it could characterize the node’s behavior with great specificity.
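
As a rough illustration of the matching step described above, the sketch below scores the overlap between a node's strong-response region and a labeled concept mask. The variable names and toy data are ours, not the paper's, and the exact scoring the authors use may differ.

%% Sketch: matching a node's strong-response pixels to a labeled concept
% Toy stand-in data: an upsampled activation map and a binary mask marking
% which pixels carry a given concept label (e.g. "wheel").
activationMap = rand(224);
conceptMask   = false(224);
conceptMask(60:120, 80:160) = true;

% Keep only this node's strongest responses (top 0.5% of pixels)
vals      = sort(activationMap(:), 'descend');
threshold = vals(ceil(0.005 * numel(vals)));
unitMask  = activationMap > threshold;

% Intersection-over-union between the response region and the concept pixels
iou = nnz(unitMask & conceptMask) / nnz(unitMask | conceptMask);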

The researchers organized the visual concepts in their database into a hierarchy. Each level of the hierarchy incorporates concepts from the level below, beginning with colors and working upward through textures, materials, parts, objects, and scenes. Typically, lower layers of a neural network would fire in response to simpler visual properties — such as colors and textures — and higher layers would fire in response to more complex properties.

But the hierarchy also allowed the researchers to quantify the emphasis that networks trained to perform different tasks placed on different visual properties. For instance, a network trained to colorize black-and-white images devoted a large majority of its nodes to recognizing textures. Another network, when trained to track objects across several frames of video, devoted a higher percentage of its nodes to scene recognition than it did when trained to recognize scenes; in that case, many of its nodes were in fact dedicated to object detection.

One of the researchers’ experiments could conceivably shed light on a vexed question in neuroscience. Research involving human subjects with electrodes implanted in their brains to control severe neurological disorders has seemed to suggest that individual neurons in the brain fire in response to specific visual stimuli. This hypothesis, originally called the grandmother-neuron hypothesis, is more familiar to a recent generation of neuroscientists as the Jennifer-Aniston-neuron hypothesis, after the discovery that several neurological patients had neurons that appeared to respond only to depictions of particular Hollywood celebrities.

Many neuroscientists dispute this interpretation. They argue that shifting constellations of neurons, rather than individual neurons, anchor sensory discriminations in the brain. Thus, the so-called Jennifer Aniston neuron is merely one of many neurons that collectively fire in response to images of Jennifer Aniston. And it’s probably part of many other constellations that fire in response to stimuli that haven’t been tested yet.

Because their new analytic technique is fully automated, the MIT researchers were able to test whether something similar takes place in a neural network trained to recognize visual scenes. In addition to identifying individual network nodes that were tuned to particular visual concepts, they also considered randomly selected combinations of nodes. Combinations of nodes, however, picked out far fewer visual concepts than individual nodes did — roughly 80 percent fewer.

“To my eye, this is suggesting that neural networks are actually trying to approximate getting a grandmother neuron,” Bau says. “They’re not trying to just smear the idea of grandmother all over the place. They’re trying to assign it to a neuron. It’s this interesting hint of this structure that most people don’t believe is that simple.”

The Drone Center’s Weekly Roundup: 7/3/17

The OR-3 autonomous security robot will begin patrolling parts of Dubai. Credit: Otsaw Digital

At the Center for the Study of the Drone

In a podcast at The Drone Radio Show, Arthur Holland Michel discusses the Center for the Study of the Drone’s recent research on local drone regulations, public safety drones, and legal incidents involving unmanned aircraft.

In a series of podcasts at the Center for a New American Security, Dan Gettinger discusses trends in drone proliferation and the U.S. policy on drone exports.

News

The U.S. Court of Appeals for the D.C. Circuit dismissed a lawsuit over the death of several civilians from a U.S. drone strike in Yemen, concurring with the decision of a lower court. In the decision, Judge Janice Rogers Brown argued that Congress had nevertheless failed in its oversight of the U.S. military. (The Hill)

Commentary, Analysis, and Art

At the Bulletin of the Atomic Scientists, Michael Horowitz argues that the Missile Technology Control Regime is poorly suited to manage international drone proliferation.

At War on the Rocks, Joe Chapa argues that debates over the ethics of drone strikes are often clouded by misconceptions.

At Phys.org, Julien Girault writes that Chinese drone maker DJI is looking at how its consumer drones can be applied to farming.

At IHS Jane’s Navy International, Anika Torruella looks at how the U.S. Navy is investing in unmanned and autonomous technologies.

Also at IHS Jane’s, Anika Torruella writes that the U.S. Navy does not plan to include large unmanned undersea vehicles as part of its 355-ship fleet goal.

At Defense One, Brett Velicovich looks at how consumer drones can easily be altered to carry a weapons payload.

At Aviation Week, James Drew considers how U.S. drone firm General Atomics is working to develop the next generation of drones.

At Popular Science, Kelsey D. Atherton looks at how legislation in California could prohibit drone-on-drone cage fights.

At the Charlotte Observer, Robin Hayes argues that Congress should not grant Amazon blanket permission to fly delivery drones.

At the MIT Technology Review, Bruce Y. Lee argues that though medicine-carrying drones may be expensive, they will save lives.

In a speech at the SMi Future Armoured Vehicles Weapon Systems conference in London, U.S. Marine Corps Colonel Jim Jenkins discussed the service’s desire to use small, cheap autonomous drones on the battlefield. (IHS Jane’s 360)

At the Conversation, Andres Guadamuz considers whether the works of robot artists should be protected by copyright.

Know Your Drone

A team at the MIT Computer Science and Artificial Intelligence Laboratory has built a multirotor drone that is also capable of driving around on wheels like a ground robot. (CNET)

Facebook conducted a test flight of its Aquila solar-powered Internet drone. (Fortune)

Meanwhile, China Aerospace Science and Technology Corporation conducted a 15-hour test flight of its Cai Hong solar-powered drone at an altitude of over 65,000 feet. (IHS Jane’s 360)

The Defense Advanced Research Projects Agency successfully tested autonomous quadcopters that were able to navigate a complex obstacle course without GPS. (Press Release)

French firm ECA group is modifying its IT180 helicopter drone for naval operations. (Press Release)

Italian firm Leonardo plans to debut its SD-150 rotary-wing military drone in the third quarter of 2017. (IHS Jane’s 360)

Researchers at MIT are developing a drone capable of remaining airborne for up to five days at a time. (TechCrunch)

Drones at Work

The government of Malawi and  humanitarian agency Unicef have launched an air corridor to test drones for emergency response and medical deliveries. (BBC)

French police have begun using drones to search for migrants crossing the border with Italy. (The Telegraph)

Researchers from Missouri University have been testing drones to conduct inspections of water towers. (Missourian)

An Australian drug syndicate reportedly used aerial drones to run counter-surveillance on law enforcement officers during a failed bid to import cocaine into Melbourne. (BBC)

In a simulated exercise in New Jersey, first responders used a drone to provide temporary cell coverage to teams on the ground. (AUVSI)

The International Olympic Committee has announced that chipmaker Intel will provide drones for light shows at future Olympic games. (CNN)

The U.S. Air Force has performed its first combat mission with the new Block 5 variant of the MQ-9 Reaper. (UPI)

The police department in West Seneca, New York has acquired a drone. (WKBW)

Chinese logistics firm SF Express has obtained approval from the Chinese government to operate delivery drones over five towns in Eastern China. (GBTimes)

Portugal’s Air Traffic Accident Prevention and Investigation Office is leading an investigation into a number of close encounters between drones and manned aircraft in the country’s airspace. (AIN Online)

The U.S. Federal Aviation Administration and app company AirMap are developing a system that will automate low-altitude drone operation authorizations. (Drone360)

Police in Arizona arrested a man for allegedly flying a drone over a wildfire. (Associated Press)

Dubai’s police will deploy the Otsaw Digital O-R3, an autonomous security robot equipped with facial recognition software and a built-in drone, to patrol difficult-to-reach areas. (Washington Post)

The University of Southampton writes that Boaty McBoatface, an unmanned undersea vehicle, captured “unprecedented data” during its voyage to the Orkney Passage.

Five flights were diverted from Gatwick Airport when a drone was spotted flying nearby. (BBC)

Industry Intel

The U.S. Special Operations Command awarded Arcturus UAV a contract to compete in the selection of the Mid-Endurance Unmanned Aircraft System. AAI Corp. and Insitu are also competing. (DoD)

The U.S. Air Force awarded General Atomics Aeronautical a $27.6 million contract for the MQ-9 Gen 4 Predator primary datalink. (DoD)

The U.S. Army awarded AAI Corp. a $12 million contract modification for the Shadow v2 release 6 system baseline update. (DoD)

The U.S. Army awarded DBISP a $73,392 contract for 150 quadrotor drones made by DJI and other manufacturers. (FBO)

The Department of the Interior awarded NAYINTY3 a $7,742 contract for Agisoft PhotoScan, computer software designed to process images from drones. (FBO)

The Federal Aviation Administration awarded Computer Sciences Corporation a $200,000 contract for work on drone registration. (USASpending)

The U.S. Navy awarded Hensel Phelps a $36 million contract to build a hangar for the MQ-4C Triton surveillance drone at Naval Station Mayport in Florida. (First Coast News)

The U.S. Navy awarded Kratos Defense & Security Solutions a $35 million contract for the BQM-177A target drones. (Military.com)

NATO awarded Leonardo a contract for logistic and support services for the Alliance Ground Surveillance system. (Shephard Media)

Clobotics, a Shanghai-based startup that develops artificial intelligence-equipped drones for infrastructure inspection, announced that it has raised $5 million in seed funding. (GeekWire)

AeroVironment’s stock fell despite revenue surging to $124.4 million in its fiscal fourth quarter. (Motley Fool)

Ford is creating the Robotics and Artificial Intelligence Research team to study emerging technologies. (Ford Motor Company)

For updates, news, and commentary, follow us on Twitter. The Weekly Drone Roundup is a newsletter from the Center for the Study of the Drone. It covers news, commentary, analysis and technology from the drone world. You can subscribe to the Roundup here.

Building with robots and 3D printers: Construction of the DFAB HOUSE up and running

At the Empa and Eawag NEST building in Dübendorf, eight ETH Zurich professors as part of the Swiss National Centre of Competence in Research (NCCR) Digital Fabrication are collaborating with business partners to build the three-storey DFAB HOUSE. It is the first building in the world to be designed, planned and built using predominantly digital processes.

Robots that build walls and 3D printers that print entire formworks for ceiling slabs – digital fabrication in architecture has developed rapidly in recent years. As part of the National Centre of Competence in Research (NCCR) Digital Fabrication, architects, robotics specialists, material scientists, structural engineers and sustainability experts from ETH Zurich have teamed up with business partners to put several new digital building technologies from the laboratory into practice. Construction is taking place at NEST, the modular research and innovation building that Empa and Eawag built on their campus in Dübendorf to test new building and energy technologies under real conditions. NEST offers a central support structure with three open platforms, where individual construction projects – known as innovation units – can be installed. Construction recently began on the DFAB HOUSE.

Digitally Designed, Planned and Built
The DFAB HOUSE is distinctive in that it was not only digitally designed and planned, but is also built using predominantly digital processes. With this pilot project, the ETH professors want to examine how digital technology can make construction more sustainable and efficient, and increase the design potential. The individual components were digitally coordinated based on the design and are manufactured directly from this data. The conventional planning phase is no longer needed. As of summer 2018, the three-storey building, with a floor space of 200 m2, will serve as a residential and working space for Empa and Eawag guest researchers and partners of NEST.

Four New Building Methods Put to the Test
At the DFAB HOUSE, four construction methods are for the first time being transferred from research to architectural applications. Construction work began with the Mesh Mould technology, which received the Swiss Technology Award at the end of 2016. The result will be a double-curved load-bearing concrete wall that will shape the architecture of the open-plan living and working area on the ground floor. A “Smart Slab” will then be installed – a statically optimised and functionally integrated ceiling slab, for which the researchers used a large-format 3D sand printer to manufacture the formwork.

Smart Dynamic Casting technology is being used for the façade on the ground floor: the automated robotic slip-forming process can produce tailor-made concrete façade posts. The two upper floors, with individual rooms, are being prefabricated at ETH Zurich’s Robotic Fabrication Laboratory using spatial timber assemblies; cooperating robots will assemble the timber construction elements.

More Information in ETH Zurich Press Release and on Project Website
Detailed information about the building process, quotes as well as image and video material can be found in the extended press release by ETH Zurich. In addition, a project website for the DFAB HOUSE is currently in development and will soon be available at the following link: www.dfabhouse.ch. Until then, Empa’s website offers information about the project: https://www.empa.ch/web/nest/digital-fabrication

NCCR Investigators Involved with the DFAB HOUSE:
Prof. Matthias Kohler, Chair of Architecture and Digital Fabrication
Prof. Fabio Gramazio, Chair of Architecture and Digital Fabrication
Prof. Benjamin Dillenburger, Chair for Digital Building Technologies
Prof. Joseph Schwartz, Chair of Structural Design
Prof. Robert Flatt, Institute for Building Materials
Prof. Walter Kaufmann, Institute of Structural Engineering
Prof. Guillaume Habert, Institute of Construction & Infrastructure Management
Prof. Jonas Buchli, Institute of Robotics and Intelligent Systems

Can we test robocars the way we tested regular cars?

I’ve written a few times that perhaps the biggest unsolved problem in robocars is how to know we have made them safe enough. While most people think of that in terms of government certification, the truth is that the teams building the cars are very focused on this, and know more about it than any regulator, but they still don’t know enough. The challenge is going to be convincing your board of directors that the car is safe enough to release, for if it is not, it could ruin the company that releases it, at least if it’s a big company with a reputation.

We don’t even have a good definition of what “safe enough” is, though most people are roughly taking that as “a safety record superior to the average human.” Some think it should be much more, few think it should be less. Tesla, now with the backing of the NTSB, has noted that their autopilot system, combined with a mix of mostly attentive but some inattentive humans, may have a record superior to the average human, for example, even though with the inattentive humans it is worse.

Last week I attended a conference in Stuttgart devoted to robocar safety testing, part of a larger auto show including an auto testing show. It was interesting to see the main auto testing show — scores of expensive and specialized machines and tools that subject cars to wear and tear, slamming doors thousands of times, baking the surfaces, rattling and vibrating everything. And testing the electronics, too.

In Europe, the focus of testing is very strongly on making sure you are compliant with standards and regulations. That’s true in the USA but not quite as much. It was in Europe some time ago that I learned the word “homologation” which names this process.


There is a lot to be learned from the previous regimes of testing. They have built a lot of tools and learned techniques. But robocars are different beasts, and will fail in different ways. They will definitely not fail the way human drivers do, where usually small things are always going wrong, and an accident happens when 2 or 3 things go wrong at once. The conference included a lot of people working on simulation, which I have been promoting for many years. The one good thing in the NHTSA regulations — the open public database of all incidents — may vanish in the new rules, and it would have made for a great simulator. The companies making the simulators (and the academic world) would have put every incident into a shared simulator so every new car could test itself in every known problem situation.

Still, we will see lots of simulators full of scenarios, and also ways to parameterize them. That means that instead of just testing how a car behaves if somebody cuts it off, you test what it does if it gets cut off with a gap of 1cm, or 10cm, or 1m, or 2m, and by different types of vehicles, and by two at once etc. etc. etc. The nice thing about computers is you can test just about every variation you can think of, and test it in every road situation and every type of weather, at least if your simulator is good enough.
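
As a sketch of what that kind of parameterization might look like (the scenario fields and the runScenario and logResult helpers are purely illustrative, not any vendor's actual API):

%% Illustrative sweep over a parameterized cut-off scenario (hypothetical API)
gaps     = [0.01 0.1 0.5 1.0 2.0];         % cut-off gap [m]
vehicles = {'car', 'truck', 'motorcycle'};
weather  = {'clear', 'rain', 'fog'};

for g = gaps
    for v = vehicles
        for w = weather
            scenario = struct('type', 'cut_off', 'gap', g, ...
                              'vehicle', v{1}, 'weather', w{1});
            result = runScenario(scenario);   % assumed simulator entry point
            logResult(scenario, result);      % assumed logging helper
        end
    end
end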

Yoav Hollander, who I met when he came as a student to the program at Singularity U, wrote a report on the approaches to testing he saw at the conference that contains useful insights, particularly on this question of new and old thinking, and what regulations drive vs. liability and fear of the public. He puts it well — traditional and certification-oriented testing has a focus on assuring you don’t have “expected bugs” but is poor at finding unexpected ones. Other testing is about finding unexpected bugs. Expected bugs are of the “we’ve seen this sort of thing before, we want to be sure you don’t suffer from it” kind. Unexpected bugs are “something goes wrong that we didn’t know to look for.”

Avoiding old thinking

I believe that we are far from done on the robocar safety question. I think there are startups who have not yet been founded who, in the future, will come up with new techniques both for promoting safety and testing it that nobody has yet thought of. As such, I strongly advise against thinking that we know very much about how to do it yet.

A classic example of things going wrong is the movement towards “explainable AI.” Here, people are concerned that we don’t really know how “black box” neural network tools make the decisions they do. Car regulations in Europe are moving towards banning software that can’t be explained in cars. In the USA, the draft NHTSA regulations also suggest the same thing, though not as strongly.

We may find ourselves in a situation where we take two systems for robocars, one explainable and the other not. We put them through the best testing we can, both in simulator and most importantly in the real world. We find the explainable system has a “safety incident” every 100,000 miles, and the unexplainable system has an incident every 150,000 miles. To me it seems obvious that it would be insane to make a law that demands the former system which, when deployed, will hurt more people. We’ll know why it hurt them. We might be better at fixing the problems, but we also might not — with the unexplainable system we’ll be able to make sure that particular error does not happen again, but we won’t be sure that others very close to it are eliminated.

Testing in sim is a challenge here. In theory, every car should get no errors in sim, because any error found in sim will be fixed or judged as not really an error or so rare as to be unworthy of fixing. Even trained machine learning systems will be retrained until they get no errors in sim. The only way to do this sort of testing in sim will be to have teams generate brand new scenarios in sim that the cars have never seen, and see how they do. We will do this, but it’s hard. Particularly because as the sims get better, there will be fewer and fewer real world situations they don’t contain. At best, the test suite will offer some new highly unusual situations, which may not be the best way to really judge the quality of the cars. In addition, teams will be willing to pay simulator companies well for new and dangerous scenarios in sim for their testing — more than the government agencies will pay for such scenarios. And of course, once a new scenario displays a problem, every customer will fix it and it will become much less valuable. Eventually, as government regulations become more prevalent, homologation companies will charge to test your compliance rate on their test suites, but again, they will need to generate a new suite every time since everybody will want the data to fix any failure. This is not like emissions testing, where they tell you that you went over the emissions limit, and it’s worth testing the same thing again.

The testing was interesting, but my other main focus was on the connected car and security sessions. More on that to come.

The Robot Academy: Lessons in inverse kinematics and robot motion

The Robot Academy is a new learning resource from Professor Peter Corke and the Queensland University of Technology (QUT), the team behind the award-winning Introduction to Robotics and Robotic Vision courses. There are over 200 lessons available, all for free.

The lessons were created in 2015 for the Introduction to Robotics and Robotic Vision courses. We describe our approach to creating the original courses in the article, An Innovative Educational Change: Massive Open Online Courses in Robotics and Robotic Vision. The courses were designed for university undergraduate students, but many lessons are suitable for anybody; each lesson shows a difficulty rating so you can easily judge whether it is for you. Below are lessons from inverse kinematics and robot motion.

You can watch the entire masterclass on the Robot Academy website.

Introduction

In this video lecture, we will learn about inverse kinematics, that is, how to compute the robot’s joint angles given the desired pose of its end-effector and knowledge about the dimensions of its links. We will also learn how to generate paths that lead to a smooth, coordinated motion of the end-effector.

Inverse kinematics for a 2-joint robot arm using geometry

In this lesson, we revisit the simple 2-link planar robot and determine the inverse kinematic function using simple geometry and trigonometry.
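
For reference, the standard closed-form solution for this arm fits in a few lines of MATLAB; the link lengths and target below are arbitrary example values, not figures from the course.

%% Geometric inverse kinematics for a 2-link planar arm (illustrative values)
a1 = 1.0;  a2 = 0.8;        % link lengths
x  = 1.2;  y  = 0.5;        % desired end-effector position

c2 = (x^2 + y^2 - a1^2 - a2^2) / (2*a1*a2);    % cos(q2), from the law of cosines
if abs(c2) > 1
    error('Target is outside the reachable workspace');
end
q2 = acos(c2);                                  % elbow-down; use -acos(c2) for elbow-up
q1 = atan2(y, x) - atan2(a2*sin(q2), a1 + a2*cos(q2));

% Sanity check with forward kinematics
xy = [a1*cos(q1) + a2*cos(q1+q2), a1*sin(q1) + a2*sin(q1+q2)];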

Inverse kinematics for a 2-joint robot arm using algebra

You can watch the entire masterclass on the Robot Academy website.


A robotic doctor is gearing up for action

A new robot under development can send information on the stiffness, look and feel of a patient to a doctor located kilometres away. Image credit: Accrea

A robotic doctor that can be controlled hundreds of kilometres away by a human counterpart is gearing up for action. Getting a check-up from a robot may sound like something from a sci-fi film, but scientists are closing in on this real-life scenario and have already tested a prototype.

‘The robot at the remote site has different force, humidity and temperature sensors, all capturing information that a doctor would get when they are directly palpating (physically examining) a patient,’ explains Professor Angelika Peer, a robotics researcher at the University of the West of England, UK.

Prof. Peer is also the project coordinator of the EU-funded ReMeDi project, which is developing the robotic doctor to allow medical professionals to examine patients over huge distances.

Through a specially designed surface mounted on a robotic arm, stiffness data from the patient’s abdomen is displayed to the human, allowing the doctor to feel what the remote robot feels. This is made possible by a tool called a haptic device, which has a soft, skin-like surface that can recreate the sense of touch by exerting force and changing its shape.

During the examination, the doctor sits at a desk facing three screens, one showing the doctor’s hand on the faraway patient and a second for teleconferencing with the patient, which will remain an essential part of the exchange.

The third screen displays a special capability of the robot doctor – ultrasonography. This is a medical technique that sends sound pulses into a patient’s body to create a window into the patient. It reveals areas of different densities in the body and is often used to examine pregnant women.

Ultrasonography is also important for flagging injuries or disease in organs such as the heart, liver, kidneys or spleen and can find indications for some types of cancer, too.

‘The system allows a doctor from a remote location to do a first assessment of a patient and make a decision about what should be done, whether to transfer them to hospital or undergo certain treatments,’ said Prof. Peer.

The robot currently resides in a hospital in Poland but scientists have shown the prototype at medical conferences around the world. And they have already been approached by doctors from Australia and Canada where it can take several hours to transfer rural patients to a doctor’s office or hospital.

With the help of a robot, a doctor can talk to a patient, manoeuvre robotic arms, feel what the robot senses and get ultrasounds. Image credit: ReMeDi

‘This is to support an initial diagnosis. The human is still in the loop, but this allows them to perform an examination remotely,’ said Prof. Peer.

Telemedicine

The ReMeDi project could speed up a medical exam and save time for patients and clinics. Another EU-funded project – United4Health (U4H) – looks at a different technology that could be used to remotely diagnose or treat people.

‘We need to transform how we deliver health and care,’ said Professor George Crooks, director of the Scottish Centre for Telehealth & Telecare, UK, which provides services via telephone, web and digital television and coordinates U4H.

This approach is crucial as Europe faces an ageing population and a rise in long-term health conditions like diabetes and heart disease. Telemedicine empowers these types of patients to take steps to help themselves at home, while staying in touch with medical experts via technology. Previous studies showed those with heart failure can be successfully treated this way.

These patients were given equipment to monitor their vital signs and send data back to a hospital. A trial in the UK comparing this self-care group to the standard-care group showed a reduction in mortality, hospital admissions and bed days, says Prof. Crooks.

A similar result was shown in the demonstration sites of the U4H project, which tested the telemedicine approach in 14 regions for patients with heart failure, diabetes and chronic obstructive pulmonary disease (COPD). Diabetic patients in Scotland kept in touch with the hospital using text messages; some COPD patients used video consultations.

Prof. Crooks stresses that it is not all about the electronics – what matters is the service wraparound that makes the technology acceptable and easy to use for patients and clinical teams.

‘It can take two or three hours out of your day to go along to a 15 minute medical appointment and then to be told to keep taking your medication. What we do is, by using technology, patients monitor their own parameters, such as blood sugar in the case of diabetes, how they are feeling, diet and so on, and then they upload these results,’ said Prof. Crooks.

‘It doesn’t mean you never go to see a doctor, but whereas you might have gone seven or eight times a year, you may go just once or twice.’

Crucially, previous research has shown these patients fare better and the approach is safe.

‘There can be an economic benefit, but really this is about saving capacity. It frees up healthcare professionals to see the more complex cases,’ said Prof. Crooks.

It also empowers patients to take more responsibility for their health and results in fewer unplanned visits to the emergency room.

‘Patient satisfaction rates were well over 90 %,’ said Prof. Crooks.

Using MATLAB for hardware-in-the-loop prototyping #1: Message passing systems

MATLAB® is a programming language and environment designed for scientific computing. It is one of the best languages for developing robot control algorithms and is widely used in the research community. While it is often thought of as an offline programming language, there are several ways to interface with it to control robotic hardware ‘in the loop’. As part of our own development we surveyed a number of different projects that accomplish this by using a message passing system and we compared the approaches they took. This post focuses on bindings for the following message passing frameworks: LCM, ROS, DDS, and ZeroMQ.

The main motivation for using MATLAB to prototype directly on real hardware is to dramatically accelerate the development cycle by reducing the time it takes to find out whether an algorithm can withstand ubiquitous real-world problems like noisy and poorly-calibrated sensors, imperfect actuator controls, and unmodeled robot dynamics. Additionally, a workflow that requires researchers to port prototype code to another language before being able to test on real hardware can often lead to weeks or months being lost in chasing down new technical bugs introduced by the port. Finally, programming in a language like C++ can pose a significant barrier to controls engineers who often have a strong electro-mechanical background but are not as strong in computer science or software engineering.

We have also noticed that over the past few years several other groups in the robotics community have run into these problems and have started to develop ways to control hardware directly from MATLAB.

The Need for External Languages

The main limitation when trying to use MATLAB to interface with hardware stems from the fact that its scripting language is fundamentally single threaded. It has been designed to allow non-programmers to do complex math operations without needing to worry about programming concepts like multi-threading or synchronization. However, this poses a problem for real-time control of hardware because all communication is forced to happen synchronously in the main thread. For example, if a control loop runs at 100 Hz and a message round-trip takes ~8 ms, the main thread ends up wasting 80% of the available time budget waiting for a response without doing any actual work.
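
A minimal sketch of that synchronous structure (with the numbers assumed above; requestFeedback, controlLaw and sendCommand are hypothetical placeholders) shows where the budget goes:

%% Synchronous control loop: communication eats most of the cycle budget
budget    = 1/100;     % 10 ms per cycle at 100 Hz
roundTrip = 0.008;     % ~8 ms blocked waiting on a message round-trip
fprintf('%.0f%% of each cycle is spent waiting\n', 100 * roundTrip / budget);

while true                            % simplified main loop
    feedback = requestFeedback();     % hypothetical call, blocks ~8 ms
    command  = controlLaw(feedback);  % must fit in the remaining ~2 ms
    sendCommand(command);             % hypothetical call, blocks again
end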

A second hurdle is that while MATLAB is very efficient in the execution of math operations, it is not particularly well suited for byte manipulation. This makes it difficult to develop code that can efficiently create and parse binary message formats that the target hardware can understand. Thus, after having the main thread spend its time waiting for and parsing the incoming data, there may not be any time left for performing interesting math operations.
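
For example, decoding even a simple binary packet in pure MATLAB means slicing byte arrays and calling typecast field by field, and all of that work lands on the main thread. The packet layout below is invented for illustration:

%% Decoding a hypothetical binary packet in pure MATLAB
% Layout (made up for this example): uint32 sequence number, then three doubles
packet = [typecast(uint32(42), 'uint8'), typecast([1.0 2.0 3.0], 'uint8')];

seq      = typecast(packet(1:4), 'uint32');     % 4 bytes
position = typecast(packet(5:28), 'double');    % 3 x 8 bytes
% ...and so on for every field of every message, at every control cycle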

Figure 1. Communications overhead in the main MATLAB thread

Pure MATLAB implementations can work for simple applications, such as interfacing with an Arduino to gather temperature data or blink an LED, but they are not feasible for controlling complex robotic systems (e.g. a humanoid) at high rates (e.g. 100 Hz-1 kHz). Fortunately, MATLAB does have the ability to interface with other programming languages, which allows users to create background threads that can offload the communications aspect from the main thread.

Figure 2. Communications overhead offloaded to other threads

Out of the box MATLAB provides two interfaces to other languages: MEX for calling C/C++ code, and the Java Interface for calling Java code. There are some differences between the two, but at the end of the day the choice effectively comes down to personal preference. Both provide enough capabilities for developing sophisticated interfaces and have orders of magnitude better performance than required. There are additional interfaces to other languages, but those require additional setup steps.
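
The Java route in particular requires no compilation step: MATLAB can instantiate and call classes on its bundled JVM directly, which is the mechanism behind the thread-backed queues used in the bindings discussed below. A trivial example:

%% Calling Java directly from MATLAB (no MEX compilation needed)
queue = java.util.concurrent.ConcurrentLinkedQueue();   % thread-safe FIFO
queue.add('hello from the JVM');
msg = queue.poll();          % returns the element, or [] when the queue is empty
disp(char(msg));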

Message Passing Frameworks

Message passing frameworks such as Robot Operating System (ROS) and Lightweight Communication and Marshalling (LCM) have been widely adopted in the robotics research community. At the core they typically consist of two parts: a way to exchange data between processes (e.g. UDP/TCP), as well as a defined binary format for encoding and decoding the messages. They allow systems to be built with distributed components (e.g. processes) that run on different computers, different operating systems, and different programming languages.

The resulting systems are very extensible and provide convenient ways for prototyping. For example, a component communicating with a physical robot can be exchanged with a simulator without affecting the rest of the system. Similarly, a new walking controller could be implemented in MATLAB and communicate with external processes (e.g. robot comms) through the exchange of messages. With ROS and LCM in particular, their flexibility, wide-spread adoption, and support for different languages make them a nice starting point for a MATLAB-hardware interface.

Lightweight Communication and Marshalling (LCM)

LCM was developed in 2006 at MIT for their entry to DARPA’s Urban Challenge. In recent years it has become a popular alternative to ROS messaging, and it was, as far as we know, the first message passing framework for robotics that supported MATLAB as a core language.

The snippet below shows what the MATLAB code for sending a command message could look like. The code creates a struct-like message, sets desired values, and publishes it on an appropriate channel.

%% MATLAB code for sending an LCM message
% Setup
lc = lcm.lcm.LCM.getSingleton();

% Fill message
cmd = types.command();
cmd.position = [1 2 3];
cmd.velocity = [1 2 3];

% Publish
lc.publish('COMMAND_CHANNEL', cmd);

Interestingly, the backing implementation of these bindings was done in pure Java and did not contain any actual MATLAB code. The exposed interface consisted of two Java classes as well as auto-generated message types.

  • The LCM class provides a way to publish messages and subscribe to channels.
  • The generated Java message classes handle the binary encoding and expose fields that MATLAB can access.
  • The MessageAggregator class provides a way to receive messages on a background thread and queue them for MATLAB.

Thus, even though the snippet looks similar to MATLAB code, all variables are actually Java objects. For example, the struct-like command type is a Java object that exposes public fields as shown in the snippet below. Users can access them the same way as fields of a standard MATLAB struct (or class properties) resulting in nice syntax. The types are automatically converted according to the type mapping.

/**
 * Java class that behaves like a MATLAB struct
 */
public final class command implements lcm.lcm.LCMEncodable
{
    public double[] position;
    public double[] velocity;
    // etc. ...
}

Receiving messages is done by subscribing an aggregator to one or more channels. The aggregator receives messages from a background thread and stores them in a queue that MATLAB can access in a synchronous manner using aggregator.getNextMessage(). Each message contains the raw bytes as well as some meta data for selecting an appropriate type for decoding.

%% MATLAB code for receiving an LCM message
% Setup
lc = lcm.lcm.LCM.getSingleton();
aggregator = lcm.lcm.MessageAggregator();
lc.subscribe('FEEDBACK_CHANNEL', aggregator);

% Continuously check for new messages
timeoutMs = 1000;
while true

    % Receive raw message
    msg = aggregator.getNextMessage(timeoutMs);

    % Ignore timeouts
    if ~isempty(msg)

        % Select message type based on channel name
        if strcmp('FEEDBACK_CHANNEL', char(msg.channel))

            % Decode raw bytes to a usable type
            fbk = types.feedback(msg.data);

            % Use data
            position = fbk.position;
            velocity = fbk.velocity;

        end

    end
end

The snippet below shows a simplified version of the backing Java code for the aggregator class. Since Java is limited to a single return argument, the getNextMessage call returns a Java type that contains the received bytes as well as meta data to identify the type, i.e., the source channel name.

/**
 * Java class for receiving messages in the background
 */
public class MessageAggregator implements LCMSubscriber {

    /**
     * Value type that combines multiple return arguments
     */
    public static class Message {

        final public byte[] data; // raw bytes
        final public String channel; // source channel name

        public Message(String channel_, byte[] data_) {
            data = data_;
            channel = channel_;
        }
    }

    /**
     * Method that gets called from MATLAB to receive new messages
     */
    public synchronized Message getNextMessage(long timeout_ms) {

        if (!messages.isEmpty()) {
            return messages.removeFirst();
        }

        if (timeout_ms == 0) { // non-blocking
            return null;
        }

        // Wait for new message until timeout ...
    }

}

Note that the getNextMessage method requires a timeout argument. In general it is important for blocking Java methods to have a timeout in order to prevent the main thread from getting stuck permanently. Being in a Java call prohibits users from aborting the execution (ctrl-c), so timeouts should be reasonably short, i.e., in the low seconds. Otherwise this could cause the UI to become unresponsive and users may be forced to close MATLAB without being able to save their workspace. Passing in a timeout of zero serves as a non-blocking interface that immediately returns empty if no messages are available. This is often useful for working with multiple aggregators or for integrating asynchronous messages with unknown timing, such as user input.
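
To illustrate that last point, here is a sketch of a loop that blocks on a fast feedback channel while polling a slower, sporadic one (the channel names are made up):

%% Sketch: blocking on feedback while polling sporadic user input
lc = lcm.lcm.LCM.getSingleton();
feedbackAgg = lcm.lcm.MessageAggregator();
joystickAgg = lcm.lcm.MessageAggregator();
lc.subscribe('FEEDBACK_CHANNEL', feedbackAgg);
lc.subscribe('JOYSTICK_CHANNEL', joystickAgg);

while true
    fbkMsg = feedbackAgg.getNextMessage(1000);   % block (with timeout) on feedback
    joyMsg = joystickAgg.getNextMessage(0);      % non-blocking check for input
    if ~isempty(joyMsg)
        % ...handle the joystick event...
    end
    if ~isempty(fbkMsg)
        % ...run the control step...
    end
end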

Overall, we thought that this was a well thought out API and a great example for a minimum viable interface that works well in practice. By receiving messages on a background thread and by moving the encoding and decoding steps to the Java language, the main thread is able to spend most of its time on actually working with the data. Its minimalistic implementation is comparatively simple and we would recommend it as a starting point for developing similar interfaces.

Some minor points for improvement that we found were:

  • The decoding step fbk = types.feedback(msg.data) forces two unnecessary translations due to msg.data being a byte[], which automatically gets converted to and from int8. This could result in a noticeable performance hit when receiving larger messages (e.g. images) and could be avoided by adding an overload that accepts a non-primitive type that does not get translated, e.g., fbk = types.feedback(msg).
  • The Java classes did not implement Serializable, which could become bothersome when trying to save the workspace.
  • We would prefer to select the decoding type during the subscription step, e.g., lc.subscribe('FEEDBACK_CHANNEL', aggregator, 'types.feedback'), rather than requiring users to instantiate the type manually (sketched below). This would clean up the parsing code a bit and allow for a less confusing error message if types are missing.
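
A sketch of how that suggestion might look from the MATLAB side; this overload does not exist in LCM, it is purely a proposal:

%% Hypothetical API: choosing the decode type at subscription time (proposal only)
lc.subscribe('FEEDBACK_CHANNEL', aggregator, 'types.feedback');

fbk = aggregator.getNextMessage(1000);   % would already be a decoded types.feedback
if ~isempty(fbk)
    position = fbk.position;
end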

Robot Operating System (ROS)

ROS is by far the most widespread messaging framework in the robotics research community and has been officially supported by MathWorks’ Robotics System Toolbox since 2014. While the Simulink code generation uses ROS C++, the MATLAB implementation is built on the less common RosJava.

The API was designed such that each topic requires dedicated publishers and subscribers, which is different from LCM, where each subscriber may listen to multiple channels/topics. While this may result in more subscriber objects, specifying the expected type at initialization removes much of the boilerplate code necessary for dealing with message types.

%% MATLAB code for publishing a ROS message
% Setup Publisher
chatpub = rospublisher('/chatter', 'std_msgs/String');

% Fill message
msg = rosmessage(chatpub);
msg.Data = 'Some test string';

% Publish
chatpub.send(msg);

Subscribers support three different styles to access messages: blocking calls, non-blocking calls, and callbacks.

%% MATLAB code for receiving a ROS message
% Setup Subscriber
laser = rossubscriber('/scan');

% (1) Blocking receive
scan = laser.receive(1); % timeout [s]

% (2) Non-blocking latest message (may not be new)
scan = laser.LatestMessage;

% (3) Callback (subscriber callbacks receive the source object and the message)
callback = @(src, msg) disp(msg);
subscriber = rossubscriber('/scan', callback);

Contrary to LCM, all objects that are visible to users are actually MATLAB classes. Even though the implementation is using Java underneath, all exposed functionality is wrapped in MATLAB classes that hide all Java calls. For example, each message type is associated with a generated wrapper class. The code below shows a simplified example of a wrapper for a message that has a Name property.

%% MATLAB code for wrapping a Java message type
classdef WrappedMessage

    properties (Access = protected)
        % The underlying Java message object (hidden from user)
        JavaMessage
    end

    methods

        function name = get.Name(obj)
            % value = msg.Name;
            name = char(obj.JavaMessage.getName);
        end

        function set.Name(obj, name)
            % msg.Name = value;
            validateattributes(name, {'char'}, {}, 'WrappedMessage', 'Name');
            obj.JavaMessage.setName(name); % Forward to Java method
        end

        function out = doSomething(obj)
            % msg.doSomething() and doSomething(msg)
            try
                out = obj.JavaMessage.doSomething(); % Forward to Java method
            catch javaException
                throw(WrappedException(javaException)); % Hide Java exception
            end
        end

    end
end

Due to the implementation being closed-source, we were only able to look at the public toolbox files as well as the compiled Java bytecode. As far as we could tell they built a small Java library that wrapped RosJava functionality in order to provide an interface that is easier to call from MATLAB. Most of the actual logic seemed to be implemented in MATLAB code, but we also found several calls to various Java libraries for problems that would have been difficult to implement in pure MATLAB, e.g., listing networking interfaces or doing in-memory decompression of images.

Overall, we found that the ROS support toolbox was very polished and a great example of how seamlessly external languages can be integrated with MATLAB. We also really liked that they offered a way to load log files (rosbags).
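For reference, reading messages out of a recorded log with the toolbox looks roughly like the following minimal sketch (based on the documented rosbag, select, and readMessages functions; the file name and topic are placeholders):

%% MATLAB code for reading messages from a rosbag log file
bag   = rosbag('logfile.bag');          % open the recorded log
scans = select(bag, 'Topic', '/scan');  % restrict to a single topic
msgs  = readMessages(scans);            % cell array of message objects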

One concern we had was that there did not seem to be a simple non-blocking way to check for new messages, e.g., a hasNewMessage() method or functionality equivalent to LCM’s getNextMessage(0). We often found this useful for applications that combined data from multiple topics that arrived at different rates (e.g. sensor feedback and joystick input events). We checked whether this behavior could be emulated by specifying a very small timeout in the receive method (shown in the snippet below), but any value below 0.1s seemed to never successfully return.

%% Trying to check whether a new message has arrived without blocking
try
    msg = sub.receive(0.1); % below 0.1s always threw an error
    % ... use message ...
catch ex
    % ignore
end
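One workaround we considered (our own sketch, not something provided by the toolbox) is to buffer the newest message inside a callback and poll that buffer, which gives behavior similar to LCM's getNextMessage(0):

%% Sketch: emulating a non-blocking check by buffering messages in a callback
classdef LatestMessageBuffer < handle
    properties (Access = private)
        Latest = [];     % most recently received message
        IsNew  = false;  % set by the callback, cleared on read
    end
    methods
        function onNewMessage(obj, ~, msg)
            % Matches the (src, msg) callback signature used by rossubscriber
            obj.Latest = msg;
            obj.IsNew  = true;
        end
        function tf = hasNewMessage(obj)
            tf = obj.IsNew;
        end
        function msg = getNextMessage(obj)
            msg       = obj.Latest;
            obj.IsNew = false;
        end
    end
end

A polling loop could then create buf = LatestMessageBuffer(), subscribe with rossubscriber('/scan', @(src, msg) buf.onNewMessage(src, msg)), and check buf.hasNewMessage() without ever blocking.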

Data Distribution Service (DDS)

In 2014 MathWorks also added a support package for DDS, which is the messaging middleware that ROS 2.0 is based on. It supports MATLAB and Simulink, as well as code generation. Unfortunately, we did not have all the requirements to get it set up, and we could not find much information about the underlying implementation. After looking at some of the intro videos, we believe that the resulting code should look as follows.

%% MATLAB code for sending and receiving DDS messages
% Setup
DDS.import('ShapeType.idl','matlab');
dp = DDS.DomainParticipant;

% Create message
myTopic = ShapeType;
myTopic.x = int32(23);
myTopic.y = int32(35);

% Send Message
dp.addWriter('ShapeType', 'Square');
dp.write(myTopic);

% Receive message
dp.addReader('ShapeType', 'Square');
readTopic = dp.read();

ZeroMQ

ZeroMQ is another asynchronous messaging library that is popular for building distributed systems. It only handles the messaging aspect, so users need to supply their own wire format. ZeroMQ-matlab is a MATLAB interface to ZeroMQ that was developed at UPenn between 2013 and 2015. We were not able to find much documentation, but as far as we could tell the resulting code should look similar to the following snippet.

%% MATLAB code for sending and receiving ZeroMQ data
% Setup
subscriber = zmq( 'subscribe', 'tcp', '127.0.0.1', 43210 );
publisher = zmq( 'publish', 'tcp', 43210 );

% Publish data
bytes = uint8(rand(100,1));
nbytes = zmq( 'send', publisher, bytes );

% Receive data
receiver = zmq('poll', 1000); % polls for the next message
[recv_data, has_more] = zmq( 'receive', receiver );

disp(char(recv_data));

It was implemented as a single MEX function that selects appropriate sub-functions based on a string argument. State was maintained by using socket IDs that were passed in by the user at every call. The code below shows a simplified snippet of the send action.

// Parsing the selected ZeroMQ action behind the MEX barrier
// Grab command String
if ( !(command = mxArrayToString(prhs[0])) )
	mexErrMsgTxt("Could not read command string. (1st argument)");

// Match command String with desired action (e.g. 'send')
if (strcasecmp(command, "send") == 0){
	// ... (argument validation)

	// retrieve arguments
	socket_id = *( (uint8_t*)mxGetData(prhs[1]) );
	size_t n_el = mxGetNumberOfElements(prhs[2]);
	size_t el_sz = mxGetElementSize(prhs[2]);
	size_t msglen = n_el*el_sz;

	// send data
	void* msg = (void*)mxGetData(prhs[2]);
	int nbytes = zmq_send( sockets[ socket_id ], msg, msglen, 0 );

	// ... check outcome and return
}
// ... other actions

Other Frameworks

Below is a list of APIs to other frameworks that we looked at but could not cover in more detail.

  • Simple Java wrapper for RabbitMQ with callbacks into MATLAB
  • Seems to be deprecated

Final Notes

In contrast to the situation a few years ago, interfaces now exist for most of the common message-passing frameworks, allowing researchers to do at least basic hardware-in-the-loop prototyping directly from MATLAB. However, if none of the available options work for you and you are planning on developing your own, we recommend the following:

  • If there is no clear pre-existing preference between C++ and Java, we recommend starting with a Java implementation. MEX interfaces require a lot of conversion code that Java interfaces handle automatically (see the sketch after this list).
  • We would recommend starting with a minimalistic LCM-like implementation and adding complexity only when necessary.
  • While interfaces that only expose MATLAB code can provide a better and more consistent user experience (e.g. help documentation), there is a significant cost associated with maintaining all of the involved layers. We would recommend holding off on creating MATLAB wrappers until the API is relatively stable.
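As a small illustration of the first point, MATLAB converts arguments and return values automatically when calling into Java, whereas a MEX interface needs hand-written conversion code for every type. The snippet below uses a standard JDK class purely as an example:

%% Calling Java from MATLAB requires no manual conversion code
buffer = java.nio.ByteBuffer.allocate(64);  % construct a Java object directly
buffer.putDouble(pi);                       % MATLAB double is converted automatically
raw = buffer.array();                       % the returned byte[] arrives as an int8 array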

Finally, even though message passing systems are very widespread in the robotics community, they do have drawbacks and are not appropriate for every application. Future posts in this series will focus on some of the alternatives.

Snake robots slither into our hearts, literally

Snake robot at the Robotics institute. Credit: Jiuguang Wang/Flickr

The biblical narrative of the Garden of Eden describes how the snake became the most cursed of all beasts: “you shall walk on your belly, and you shall eat dust all the days of your life.” The reptile's cursed form is no longer feared but embraced for its versatility and flexibility, and the snake is fast becoming one of the most celebrated robotic creatures among roboticists worldwide, out-maneuvering rovers and humanoids alike.

Last week, while General Electric experienced a tumult in its management structure, its Aviation unit completed the acquisition of OC Robotics – a leader in snake-arm robot design. GE stated that it believes OC's robots will be useful for jet engine maintenance, enabling repairs to be conducted while the engine is still attached to the wing by wiggling into parts that no human hand could reach. This promise translates into huge cost and time savings for maintenance and airline companies alike.

OC's robots have use cases beyond aviation, including inspections of underground drilling and directional borings tens of feet below the surface. In addition to acquiring visual data, OC's snake is equipped with a high-pressure water jet and a laser to measure the sharpness of the cutting surface. According to OC's founder Andrew Graham, “This is faster and easier, and it keeps people safe.” Graham seems to have hit on the holy grail of robotics by combining profit and safety.

GE plans to expand the use cases for its newest company. Lance Herrington, a leader at GE Aviation Services, says “Aviation applications will just be the starting point for this incredible technology.” Herrington implied that the snake technology could be adapted in the future for power plants, trains, and even healthcare robots. As an example of its versatility, OC Robotics was awarded a prestigious prize by the U.K.'s Nuclear Decommissioning Authority for its LaserSnake. OC's integrated snake-arm laser cutter was able to disassemble toxic parts of a nuclear fuel processing facility in a matter of weeks, a job that would have taken humans years while risking radiation exposure.

One of the most prolific inventors of robotic snake applications is Dr. Howie Choset of Carnegie Mellon University. Choset is the co-director of CMU's Biorobotics Lab, which has birthed several startups based upon his snake technology, including Medrobotics (surgical systems); Hebi Robotics (actuators for modular robots); and Bito Robotics (autonomous vehicles). Choset claims that his menagerie of metal reptiles is perfect for urban search and rescue, infrastructure repairs and medicine.

Source: Medrobotics

Recently, Medrobotics received FDA clearance for its Flex Robotic System for colorectal procedures in the United States. According to the company's press release, “Medrobotics is the first and only company to offer minimally invasive, steerable and shapeable robotic products for colorectal procedures in the U.S.” The Flex system promises a “scar-free” experience in accessing “hard-to-reach anatomy” that is just not possible with straight, rigid instruments.

“The human gastrointestinal system is full of twists and turns, and rigid surgical robots were not designed to operate in that environment. The Flex® Robotic System was. Two years ago Medrobotics started revolutionizing treatment in the head and neck in the U.S. We can now begin doing that in colorectal procedures,” said Dr. Samuel Straface, CEO.

Dr. Alessio Pigazzi, Professor of Surgery at the University of California, Irvine, exclaimed that “Medrobotics is ushering in the first of a new generation of shapeable and steerable robotic surgical systems that offer the potential to reduce the invasiveness of surgical procedures for more patients.” While Medrobotics’ system is currently only approved for use through the mouth and anus, Pigazzi looks forward to future applications whereby any natural orifices could be an entry point for true incision-less surgery.

The Technion ‘Snake Robot’. Photo: Kobi Gideon/GPO

Medrobotics was the brainchild of a collaboration between Choset and Dr. Alon Wolf of Israel's prestigious Technion – Israel Institute of Technology. One of the earliest use cases for snake robots was by Wolf's team in 2009 for military surveillance. As director of the Technion's BioRobotics and BioMechanics Laboratory (BRML), Wolf has led the creation of the next generation of defensive snake robots for the latest terror threat: subterranean tunnels transporting suicide bombers and kidnappers. Since the discovery of tunnels between the Gaza Strip and Israel in 2015, BRML has been working feverishly to deploy snake robots in the field of combat.

The vision for BRML's hyper-redundant robots is to use their highly maneuverable actuators to sneak through tough terrain into tunnels and buildings. Once inside, the robot will provide instant scans of the environment to the command center and then leave behind sensors for continued surveillance. The robots are equipped with an array of sensors, including thermal imagers, miniature cameras, laser scanners, and laser radar, with the ability to seamlessly stitch 360-degree views and maps of the targeted subterranean area. The robots, of course, would have dual uses for search & rescue and disaster recovery efforts.

Long term, Wolf would like to deploy his fleet of crawlers on search and rescue missions in urban locations and earthquake zones.

“The robots we are creating at the Technion are extremely flexible and are able to manipulate delicate objects and navigate around walls. Over 400 rescue workers were killed during 9/11 because of the dangerous and unstable environment they were attempting to access and our objective is to ensure that robots are able to replace humans in such precarious situations,” explains Wolf.

It is no wonder why on his last visit to Israel, President Obama called Wolf’s vision “inspiring.”

Spider webs as computers

Spiders are truly amazing creatures. They have evolved over more than 200 million years and can be found in almost every corner of our planet. They are among the most successful animals. No less impressive are their webs: highly intricate structures that have been optimised through evolution over approximately 100 million years with the ultimate purpose of catching prey.

Interestingly, however, the closer you look at spiders' webs, the more detail you can observe, and the structures are far more complicated than one would expect of a simple snare. They are made of a variety of different types of silk, use water droplets to maintain tension [4], and the structure is highly dynamic [4]. Spiders' webs have a great deal more morphological complexity than would be needed simply to catch flies.

Since nature typically does not squander resources, the question arises: why are spiders' webs so complex? Might they have other functionalities besides being a simple trap? One of the most interesting answers to this question is that spiders might use their webs as computational devices.

How does the spider use the web as a computer?

Although most spiders have many eyes (the majority have eight, and some have up to twelve), many spiders have poor eyesight. In order to understand what is going on in their webs, they use mechanoreceptors in their legs (called lyriform organs) to “listen” to vibrations in the web. Different species of spiders have different preferred places to sit and observe. While some can be found right at the center, others prefer to sit outside the actual web and listen to one single thread. It is quite remarkable that, based only on the information that comes through this single thread, the spider seems to be able to deduce what is going on in its web and where these events are taking place.

For example, they need to know if there is prey, like a fly, entangled in the web, or if the vibrations are coming from a dangerous insect like a wasp that they should stay away from. The web is also used to communicate with potential mates, and the spider even excites the web and listens to the echo. This might be a way for the spider to check whether threads are broken or whether the tension in the web has to be increased.

From a computational point of view, the spider needs to classify different vibration patterns (e.g., prey vs. predator vs. mate) and to locate their origin (i.e., where the vibration started).

One way to understand how a spider's web could help to carry out this computational functionality is the concept of morphological computation. This term describes the understanding that mechanical structures throughout nature carry out useful computations. For example, they help to stabilise running, facilitate sensory data processing, and help animals and plants to interact with complex and unpredictable environments.

One could say computation is outsourced to the physical body (e.g., from the brain to another part of the body).

From this point of view, the spider's web can be seen as a nonlinear, dynamic filter. It can be understood as a kind of pre-processing unit that makes it easier for the animal to interpret the vibration signals. The web's dynamic properties and its complex morphological structure mix vibration signals in a nonlinear fashion. It even has some memory: this can easily be seen by pinching the web, which responds with vibrations for some moments after the impact, echoing the original input. The web can also damp unwanted frequencies, which is crucial for getting rid of noise. On the other hand, it might even be able to highlight signals at certain frequencies that carry more relevant information about the events taking place on the web.

These are all useful computations, and they make it easier for the spider to “read” and understand the vibration patterns. As a result, the brain of the animal has to do less work and can concentrate on other tasks. In effect, the spider seems to devolve computation to the web. This might also be the reason why spiders tend to their webs so intensively: they constantly observe the web, adapt the tension if it has changed (e.g., due to a change in humidity), and repair it as soon as a thread is broken.

From spider webs to sensors

People have speculated for a while that spider webs might have additional functionalities. A great article that discusses that is “The Thoughts of a Spiderweb“.

However, nobody so far has systematically looked into the actual computational capabilities of the web. This is about to change. We recently started a Leverhulme Trust Research project that will investigate naturally spun spider webs of different species to understand how, and what kind of, computation might take place in these structures. Moreover, the project will not only try to understand the underlying computational principles but will also develop morphological computation-based sensor technology to measure flow and vibrations.

The project combines our research expertise in Morphological Computation at the University of Bristol and the expertise on spider webs at the Silk Group in Oxford.

In experimental setups we will use solenoids and laser Doppler vibrometers to measure vibrations in the web with very high precision. The goal is to understand how computation is carried out. We will systematically investigate how filtering capabilities, memory, and signal integration can happen in such structures. In parallel, we will develop a general simulation environment for vibrating structures. We will use this to ask specific questions about how different shapes and materials other than spider webs and silk can help to carry out computations. In addition, we will develop real prototypes of vibration and flow sensors inspired by these findings. They will very likely look different from spider webs and will use various types of materials.

Such sensors could be used in various applications. For example, morphological computation-based flow sensors could be used to detect anomalies in the flow in tubes, and vibration sensors placed at strategic points on buildings could detect earthquakes or structural failure. Highly dynamic machines, such as wind turbines, could also be monitored by such sensors to predict failure.

Ultimately, the project will provide not only a new technology for building sensors, but, we hope, also a fundamental understanding of how spiders use their webs for computation.

References

[1] Hauser, H.; Ijspeert, A.; Füchslin, R.; Pfeifer, R. & Maass, W. “Towards a theoretical foundation for morphological computation with compliant bodies.” Biological Cybernetics, Springer Berlin / Heidelberg, 2011, 105, 355-370

[2]  Hauser, H.; Ijspeert, A.; Füchslin, R.; Pfeifer, R. & Maass, W. “The role of feedback in morphological computation with compliant bodies”. Biological Cybernetics, Springer Berlin / Heidelberg, 2012, 106, 595-613

[3] Hauser, H.; Füchslin, R.M.; Nakajima, K. “Morphological Computation – The Physical Body as a Computational Resource.” Opinions and Outlooks on Morphological Computation, editors Hauser, H.; Füchslin, R.M. and Pfeifer, R., Chapter 20, pp. 226-244, 2014, ISBN 978-3-033-04515-6

[4] Mortimer, B., Gordon, S. D., Holland, C., Siviour, C. R., Vollrath, F. and Windmill, J. F. C. (2014), The Speed of Sound in Silk: Linking Material Performance to Biological Function. Adv. Mater., 26: 5179–5183. doi:10.1002/adma.201401027

Drones that drive

Image: Alex Waller, MIT CSAIL

Being able to both walk and take flight is typical in nature – many birds, insects and other animals can do both. If we could program robots with similar versatility, it would open up many possibilities: picture machines that could fly into construction areas or disaster zones that aren’t near roads, and then be able to squeeze through tight spaces to transport objects or rescue people.

The problem is that robots that are good at one mode of transportation are usually, by necessity, bad at another. Drones are fast and agile, but generally have too limited a battery life to travel long distances. Ground vehicles, meanwhile, are more energy efficient, but also slower and less mobile.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are aiming to develop robots that can do both. In a new paper, the team presented a system of eight quadcopter drones that can both fly and drive through a city-like setting with parking spots, no-fly zones and landing pads.

“The ability to both fly and drive is useful in environments with a lot of barriers, since you can fly over ground obstacles and drive under overhead obstacles,” says PhD student Brandon Araki, lead author on a paper about the system out of CSAIL director Daniela Rus’ group. “Normal drones can’t maneuver on the ground at all. A drone with wheels is much more mobile while having only a slight reduction in flying time.”

Araki and Rus developed the system along with MIT undergraduate students John Strang, Sarah Pohorecky and Celine Qiu, as well as Tobias Naegeli of ETH Zurich’s Advanced Interactive Technologies Lab. The team presented their system at IEEE’s International Conference on Robotics and Automation (ICRA) in Singapore earlier this month.

How it works

The project builds on Araki’s previous work developing a “flying monkey” robot that crawls, grasps, and flies. While the monkey robot could hop over obstacles and crawl about, there was still no way for it to travel autonomously.

To address this, the team developed various “path-planning” algorithms aimed at ensuring that the drones don’t collide. To make them capable of driving, the team put two small motors with wheels on the bottom of each drone. In simulations the robots could fly for 90 meters or drive for 252 meters before their batteries ran out.

Adding the driving component to the drone slightly reduced its battery life, meaning that the maximum distance it could fly decreased 14 percent to about 300 feet. But since driving is still much more efficient than flying, the gain in efficiency from driving more than offsets the relatively small loss in efficiency in flying due to the extra weight.

“This work provides an algorithmic solution for large-scale, mixed-mode transportation and shows its applicability to real-world problems,” says Jingjin Yu, a computer science professor at Rutgers University who was not involved in the paper.

The team also tested the system using everyday materials like pieces of fabric for roads and cardboard boxes for buildings. They tested eight robots navigating from a starting point to an ending point on a collision-free path, and all were successful.

Rus says that systems like theirs suggest that another approach to creating safe and effective flying cars is not to simply “put wings on cars,” but to build on years of research in drone development to add driving capabilities to them.

“As we begin to develop planning and control algorithms for flying cars, we are encouraged by the possibility of creating robots with these capabilities at small scale,” says Rus. “While there are obviously still big challenges to scaling up to vehicles that could actually transport humans, we are inspired by the potential of a future in which flying cars could offer us fast, traffic-free transportation.”

Click here to read the paper.

The Drone Center’s Weekly Roundup: 6/24/17

Amazon’s “beehive” concept for future multi-storey fulfillment centers. Credit: Amazon

June 19, 2017 – June 25, 2017

At the Center for the Study of the Drone

In an interview with Robotics Tomorrow, Center for the Study of the Drone Co-Director Arthur Holland Michel discusses the growing use of drones by law enforcement and describes future trends in unmanned systems technology.

News

The U.S. State Department is set to approve the sale of 22 MQ-9B Guardian drones to India, according to Defense News. The sale is expected to be announced during Prime Minister Narendra Modi’s visit to the United States. The Guardian is an unarmed variant of the General Atomics Aeronautical Systems Predator B. If the deal is approved and finalized, India would be the fifth country besides the U.S. and first non-NATO member to operate the MQ-9.

The United States shot down another armed Iranian drone in Syria. A U.S. F-15 fighter jet intercepted the Shahed-129 drone near the town of Tanf, where the U.S.-led coalition is training Syrian rebel forces. The shootdown comes just days after the U.S. downed another Shahed-129 on June 8, as well as a Syrian SU-22 manned fighter jet on June 18. (Los Angeles Times)

Meanwhile, a spokesperson for Pakistan’s Ministry of Foreign Affairs confirmed that the Pakistani air force shot down an Iranian drone. According to Nafees Zakaria, the unarmed surveillance drone was downed 2.5 miles inside Pakistani territory in the southwest Baluchistan province. (Associated Press)

A U.S. Air Force RQ-4 Global Hawk drone crashed in the Sierra Nevada mountains in California. The RQ-4 is a high-altitude long-endurance surveillance drone. (KTLA5)

The U.S. House of Representatives and Senate introduced bills to reauthorize funding for the Federal Aviation Administration. Both bills include language on drones. The Senate bill would require all drone operators to pass an aeronautical knowledge test and would authorize the FAA to require that drone operators be registered. (Law360)

President Trump spoke with the CEOs of drone companies at the White House as part of a week focused on emerging technologies. Participants discussed a number of topics, including state and local drone laws and drone identification and tracking technologies. (TechCrunch)

The Pentagon will begin offering an award for remote weapons strikes to Air Force personnel in a variety of career fields, including cyber and space. The “R” device award was created in 2016 to recognize drone operators. (Military.com)

The U.S. Federal Aviation Administration has formed a committee to study electronic drone identification methods and technologies. The new committee is comprised of representatives from industry, government, and law enforcement. (Press Release)

Commentary, Analysis, and Art

At MarketWatch, Sally French writes that in the meeting at the White House, some CEOs of drone companies argued for more, not fewer, drone regulations. (MarketWatch)

At Air & Space Magazine, James R. Chiles writes that the crowded airspace above Syria could lead to the first drone-on-drone air war.

At Popular Science, Kelsey D. Atherton looks at how fighter jets of the future will be accompanied by swarms of low-cost armed drones.  

At Drone360, Leah Froats breaks down the different drone bills that have recently been introduced in Congress.

At Motherboard, Ben Sullivan writes that drone pilots are “buying Russian software to hack their way past DJI’s no fly zones.”

At Bloomberg Technology, Thomas Black writes that the future of drone delivery hinges on precise weather predictions.

At Aviation Week, James Drew writes that U.S. lawmakers are encouraging the Air Force to conduct a review of the different MQ-9 Reaper models that it plans to purchase.  

Also at Aviation Week, Tony Osborne writes that studies show that European governments are advancing the implementation of drone regulations.

At The Atlantic, Marina Koren looks at how artificial intelligence helps the Curiosity rover navigate the surface of Mars without any human input.

At Phys.org, Renee Cho considers how drones are helping advance scientific research.

At Ozy, Zara Stone writes that drones are helping to accelerate the time it takes to complete industrial painting jobs.

At the European Council on Foreign Relations, Ulrike Franke argues that instead of following the U.S. example, Europe should develop its own approach to acquiring military drones.

At the New York Times, Frank Bures looks at how a U.S. drone pilot is helping give the New Zealand team an edge in the America’s Cup.

At Cinema5D, Jakub Han examines how U.S. drone pilot Robert Mcintosh created an intricate single-shot fly-through video in Los Angeles.

Know Your Drone

Amazon has filed a patent for multi-storey urban fulfilment centers for its proposed drone delivery program. (CNN)

Airbus Helicopters has begun autonomous flight trials of its VSR700 optionally piloted helicopter demonstrator. (Unmanned Systems Technology)

Italian defense firm Leonardo unveiled the M-40, a target drone that can mimic the signatures of a number of aircraft types. (FlightGlobal)

Defense firm Textron Systems unveiled the Nightwarden, a new variant of its Shadow tactical surveillance and reconnaissance drone. (New Atlas)

Israeli defense firm Elbit Systems unveiled the SkEye, a wide-area persistent surveillance sensor that can be used aboard drones. (IHS Jane’s 360)

Researchers at the University of California, Santa Barbara have developed a WiFi-based  system that allows drones to see through solid walls. (TechCrunch)

Israeli drone maker Aeronautics unveiled the Pegasus 120, a multirotor drone designed for a variety of roles. (IHS Jane’s 360)  

U.S. firm Raytheon has developed a new variant of its Coyote, a tube-launched aerial data collection drone. (AIN Online)

Drone maker Boeing Insitu announced that it has integrated a 50-megapixel photogrammetric camera into a variant of its ScanEagle fixed-wing drone. (Unmanned Systems Technology)

Telecommunications giant AT&T is seeking to develop a system to mount drones on ground vehicles. (Atlanta Business Chronicle)

U.S. defense contractor Northrop Grumman demonstrated an unmanned surface vehicle in a mine-hunting exercise in Belgium. (AUVSI)

Israeli firm Rafael Advanced Defense Systems unveiled a new radar and laser-based counter-drone system called Drone Dome. (UPI)

French firm Reflet du Monde unveiled the RDM One, a small drone that can be flown at ranges of up to 300 kilometers thanks to a satellite link. (Defense News)

RE2 Robotics is helping the U.S. Air Force build robots that can take the controls of traditionally manned aircraft. (TechCrunch)

The U.S. Marine Corps is set to begin using its Nibbler 3D-printed drone in active combat zones in the coming weeks. (3D Printing Industry)

U.S. drone maker General Atomics Aeronautical Systems has completed a design review for its Advanced Cockpit Block 50 Ground Control Station for U.S. Air Force drones. (UPI)

Researchers at NASA’s Langley Research Center are developing systems for small drones that allow them to determine on their own if they are suffering from mechanical issues and to find a place to land safely. (Wired)

The inventor of the Roomba robotic vacuum cleaner has unveiled an unmanned ground vehicle that autonomously finds and removes weeds from your garden. (Business Insider)

Drones at Work

A group of public safety agencies in Larimer County, Colorado have unveiled a regional drone program. (The Coloradoan)

Five marijuana growing operations in California will begin using unmanned ground vehicles for security patrols. (NBC Los Angeles)

The Fargo Fire Department in North Dakota has acquired a drone for a range of operations. (KFGO)

The Rochester Police Department in Minnesota has acquired a drone for monitoring patients suffering from Alzheimer’s and other disorders. (Associated Press)

Drone maker Parrot and software firm Pix4D have selected six researchers using drones to study the impacts of climate change as the winners of an innovation grant. (Unmanned Aerial Online)

The Coconino County Sheriff’s Office and the Flagstaff Police Department used an unmanned ground vehicle to enter the home of a man who had barricaded himself in a standoff. (AZ Central)

Industry Intel

The U.S. Special Operations Command awarded Boeing Insitu and Textron Systems contracts to compete for the Mid-Endurance Unmanned Aircraft Systems III drone program. (AIN Online)

The U.S. Navy awarded Arête Associates a $8.5 million contract for the AN/DVS-1 COBRA, a payload on the MQ-8 Fire Scout. (DoD)

The U.S. Army awarded Raytheon a $2.93 million contract for Kinetic Drone Defense. (FBO)

The Spanish Defense Ministry selected the AUDS counter-drone system for immediate deployments. The contract is estimated to be worth $2.24 million. (GSN Magazine)

The European Maritime Safety Agency selected the UMS Skeldar for border control, search and rescue, pollution monitoring, and other missions. (FlightGlobal)

The Belgian Navy awarded SeeByte, a company that creates software for unmanned maritime systems, a contract for the SeeTrack software system for its autonomous undersea vehicles. (Marine Technology News)

A new company established by the Turkish government will build engines for the armed Anka drone. (DefenseNews)

Italian defense firm Leonardo is seeking to market its Falco UAV for commercial applications. (Shephard Media)  

Thales Alenia Space will acquire a minority stake in Airstar Aerospace, which it hopes will help it achieve its goal of developing an autonomous, high-altitude airship. (Intelligent Aerospace)

The Idaho STEM Action Center awarded 22 schools and libraries in Idaho $147,000 to purchase drones. (East Idaho News)

For updates, news, and commentary, follow us on Twitter. The Weekly Drone Roundup is a newsletter from the Center for the Study of the Drone. It covers news, commentary, analysis and technology from the drone world. You can subscribe to the Roundup here.

Survey: Examining perceptions of autonomous vehicles using hypothetical scenarios

Driverless car merging into traffic. How big of a gap between vehicles is acceptable? Image credit: Jordan Collver

I’m examining the perception of autonomous cars using hypothetical scenarios. Each of the hypothetical scenarios is accompanied by an image to help illustrate the scene — using grey tones and nondescript human-like features — along with the option to listen to the question spoken aloud, to help participants fully visualise the situation.

If you live in the UK, you can take this survey and help contribute to my research!

Public perception has the potential to impact the timescale and adoption of autonomous vehicles (AVs). As the development of the technology advances, understanding attitudes and wider public acceptability is critical. It's no longer a question of if, but when, we will transition. Long-range autonomous vehicles are expected between 2020 and 2025, with some estimates suggesting fully autonomous vehicles will take over by 2030. Currently, many modern cars are sold with automated features: automatic braking, autonomous parking, advanced lane assist, advanced cruise control, and queue assist, for example. Adopting fully autonomous vehicles has the potential to deliver significant societal benefits: improved road safety, reduced pollution and congestion, and another mode of transportation for the mobility impaired.

The project’s aim is to add to the conversation about public perception of AV. Survey experiments can be extremely useful tools for studying public attitudes, especially if researchers are fascinated by the “effects of describing or presenting a scenario in a particular way.”  This unusual and creative method may provide a model for other types of research surveys in the future where it’s difficult to visualise future technologies. An online survey was chosen to remove small sample bias and maximise responses by participants in the UK.

You can take this survey by clicking above, or alternatively, click the following link:

https://uwe.onlinesurveys.ac.uk/visualise-this

CARNAC program researching autonomous co-piloting

Credit: Aurora Flight Sciences.

DARPA, the Defense Advanced Research Projects Agency, is researching autonomous co-piloting so that aircraft can fly without a human pilot on board. The robotic system — called the Common Aircraft Retrofit for Novel Autonomous Control (CARNAC) (not to be confused with the old Johnny Carson Carnac routine) — has the potential to reduce costs, enable new missions, and improve performance.

CARNAC, the Johnny Carson version.

Unmanned aircraft are generally built from scratch with robotic systems integrated from the earliest design stages. Existing aircraft require extensive modification to add robotic systems.

RE2, the CMU spin-off located in Pittsburgh, makes mobile manipulators for defense and space. It just received SBIR funding under a US Air Force development contract to develop a retrofit kit that would provide a robotic piloting solution for legacy aircraft.

“Our team is excited to incorporate the Company’s robotic manipulation expertise with proven technologies in applique systems, vision processing algorithms, and decision making to create a customized application that will allow a wide variety of existing aircraft to be outfitted with a robotic pilot,” stated Jorgen Pedersen, president and CEO of RE2 Robotics. “By creating a drop-in robotic pilot, we have the ability to insert autonomy into and expand the capabilities of not only traditionally manned air vehicles, but ground and underwater vehicles as well. This application will open up a whole new market for our mobile robotic manipulator systems.”

Aurora Flight Sciences, a Manassas, VA developer of advanced unmanned systems and aerospace vehicles, is working on a similar DARPA project, the Aircrew Labor In-Cockpit Automation System (ALIAS), which is designed as a drop-in avionics and mechanics package that can be quickly and cheaply fitted to a wide variety of fixed-wing and rotary-wing aircraft, from a Cessna to a B-52. Once installed, ALIAS is able to analyze the aircraft and adapt itself to the job of a second pilot.

Credit: Aurora Flight Sciences

Assistive robots compete in Bristol

The Bristol Robotics Laboratory (BRL) will host the first European Commission-funded European Robotics League (ERL) tournament for service robots to be held in the UK.

Two teams, from the BRL and the University of Birmingham, will pitch their robots against each other in a series of events between 26 and 30 June.

Robots designed to support people with care-related tasks in the home will be put to the test in a simulated home test bed.

The assisted living robots of the two teams will face various challenges, including understanding natural speech and finding and retrieving objects for the user.

The robots will also have to greet visitors at the door appropriately, such as welcoming a doctor on their visit, or turning away unwanted visitors.

Associate Professor Praminda Caleb-Solly, Theme Leader for Assistive Robotics at the BRL said, “The lessons learned during the competition will contribute to how robots in the future help people, such as those with ageing-related impairments and those with other disabilities, live independently in their own homes for as long as possible.

“This is particularly significant with the growing shortage of carers available to provide support for an ageing population.”

The BRL, the host of the UK’s first ERL Service Robots tournament, is a joint initiative of the University of the West of England and the University of Bristol. The many research areas include swarm robotics, unmanned aerial vehicles, driverless cars, medical robotics and robotic sensing for touch and vision. BRL’s assisted living research group is developing interactive assistive robots as part of an ambient smart home ecosystem to support independent living.

The ERL Service Robots tournament will be held in the BRL’s Anchor Robotics Personalised Assisted Living Studio, which was set up to develop, test and evaluate assistive robotic and other technologies in a realistic home environment.

The studio was recently certified as a test bed by the ERL, whose service robots league runs alongside similar competitions for industrial robots and for emergency robots, including vehicles that can search for and rescue people in disaster-response scenarios.

The two teams in the Bristol event will be Birmingham Autonomous Robotics Club (BARC) led by Sean Bastable from the School of Computer Science at the University of Birmingham, and the Healthcare Engineering and Assistive Robotics Technology and Services (HEARTS) team from the BRL led by PhD Student Zeke Steer.

BARC has developed its own robotics platform, Dora, and HEARTS will use a TIAGo Steel robot from PAL Robotics with a mix of bespoke and proprietary software.

The Bristol event will be open for public viewing in the BRL on the afternoon of the 29th of June 2017 (bookable via EventBrite), and will include short tours of the assisted living studio for attendees. It will be held during UK Robotics Week, 24-30 June 2017, when there will be a nationwide programme of robotics and automation events.

The BRL will also be organising focus groups on 28 and 29 June 2017 (Bookable via EventBrite and here) as part of the UK Robotics Week, to demonstrate assistive robots and their functionality, and seek the views of carers and older adults on these assistive technologies, exploring further applications and integration of such robots into care scenarios.

The European Commission-funded European Robotics League (ERL) is the successor to the RoCKIn, euRathlon and EuRoC robotics competitions, all funded by the EU and designed to foster scientific progress and innovation in cognitive systems and robotics. The ERL is funded by the European Union’s Horizon 2020 research and innovation programme. See: https://www.eu-robotics.net/robotics_league/

The ERL is part of the SPARC public-private partnership set up by the European Commission and the euRobotics association to extend Europe’s leadership in civilian robotics. SPARC’s €700 million of funding from the Commission in 2014–20 is being combined with €1.4 billion of funding from European industry. See: http://www.eu-robotics.net/sparc

euRobotics is a European Commission-funded non-profit organisation which promotes robotics research and innovation for the benefit of Europe’s economy and society. It is based in Brussels and has more than 250 member organisations. See: www.eu-robotics.net
