Archive 26.07.2017


Folding robots: No battery, no wires, no problem

A magnetic folding robot arm can grasp and bend thanks to its pattern of origami-inspired folds and a wireless electromagnetic field. Credit: Wyss Institute at Harvard University

The traditional Japanese art of origami transforms a simple sheet of paper into complex, three-dimensional shapes through a very specific pattern of folds, creases, and crimps. Folding robots based on that principle have emerged as an exciting new frontier of robotic design, but generally require onboard batteries or a wired connection to a power source, making them bulkier and clunkier than their paper inspiration and limiting their functionality.

A team of researchers at the Wyss Institute for Biologically Inspired Engineering and the John A. Paulson School of Engineering and Applied Sciences (SEAS) at Harvard University has created battery-free folding robots that are capable of complex, repeatable movements powered and controlled through a wireless magnetic field.

“Like origami, one of the main points of our design is simplicity,” says co-author Je-sung Koh, Ph.D., who conducted the research as a Postdoctoral Fellow at the Wyss Institute and SEAS and is now an Assistant Professor at Ajou University in South Korea. “This system requires only basic, passive electronic components on the robot to deliver an electric current—the structure of the robot itself takes care of the rest.”

The research team’s robots are flat and thin (resembling the paper on which they’re based) plastic tetrahedrons, with the three outer triangles connected to the central triangle by hinges, and a small circuit on the central triangle. Attached to the hinges are coils made of a type of metal called shape-memory alloy (SMA) that can recover its original shape after deformation by being heated to a certain temperature. When the robot’s hinges lie flat, the SMA coils are stretched out in their “deformed” state; when an electric current is passed through the circuit and the coils heat up, they spring back to their original, relaxed state, contracting like tiny muscles and folding the robots’ outer triangles in toward the center. When the current stops, the SMA coils are stretched back out due to the stiffness of the flexure hinge, thus lowering the outer triangles back down.

The power that creates the electrical current needed for the robots’ movement is delivered wirelessly using electromagnetic power transmission, the same technology inside wireless charging pads that recharge the batteries in cell phones and other small electronics. An external coil with its own power source generates a magnetic field, which induces a current in the circuits in the robot, thus heating the SMA coils and inducing folding. In order to control which coils contract, the team built a resonator into each coil unit and tuned it to respond only to a very specific electromagnetic frequency. By changing the frequency of the external magnetic field, they were able to induce each SMA coil to contract independently from the others.
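
The physics behind this frequency-selective addressing is simple enough to sketch. In the toy Python model below, each hinge circuit is an LC resonator with natural frequency f = 1/(2π√(LC)), and only hinges whose resonance lies near the external field's drive frequency respond; the component values are hypothetical illustrations, not figures from the paper.

    import math

    def resonant_frequency(inductance_h, capacitance_f):
        """Natural frequency (Hz) of an ideal LC resonator: f = 1 / (2*pi*sqrt(L*C))."""
        return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

    # Hypothetical resonators for three hinges (values chosen for illustration only).
    hinges = {"hinge_1": (10e-6, 100e-9),   # 10 uH, 100 nF -> ~159 kHz
              "hinge_2": (10e-6, 47e-9),    # -> ~232 kHz
              "hinge_3": (10e-6, 22e-9)}    # -> ~339 kHz

    def responding_hinges(drive_freq_hz, tolerance=0.05):
        """Hinges whose resonance lies within `tolerance` of the drive frequency."""
        return [name for name, (L, C) in hinges.items()
                if abs(resonant_frequency(L, C) - drive_freq_hz) / drive_freq_hz <= tolerance]

    print(responding_hinges(159e3))  # -> ['hinge_1']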

“Not only are our robots’ folding motions repeatable, we can control when and where they happen, which enables more complex movements,” explains lead author Mustafa Boyvat, Ph.D., also a Postdoctoral Fellow at the Wyss Institute and SEAS.

Just like the muscles in the human body, the SMA coils can only contract and relax: it’s the structure of the body of the robot — the origami “joints” — that translates those contractions into specific movements. To demonstrate this capability, the team built a small robotic arm capable of bending to the left and right, as well as opening and closing a gripper around an object. The arm is constructed with a special origami-like pattern to permit it to bend when force is applied, and two SMA coils deliver that force when activated while a third coil pulls the gripper open. By changing the frequency of the magnetic field generated by the external coil, the team was able to control the robot’s bending and gripping motions independently.

There are many applications for this kind of minimalist robotic technology; for example, rather than having an uncomfortable endoscope put down their throat to assist a doctor with surgery, a patient could just swallow a micro-robot that could move around and perform simple tasks, like holding tissue or filming, powered by a coil outside their body. Using a much larger source coil — on the order of yards in diameter — could enable wireless, battery-free communication between multiple “smart” objects in an entire home. The team built a variety of robots — from a quarter-sized flat tetrahedral robot to a hand-sized ship robot made of folded paper — to show that their technology can accommodate a variety of circuit designs and successfully scale for devices large and small. “There is still room for miniaturization. We don’t think we went to the limit of how small these can be, and we’re excited to further develop our designs for biomedical applications,” Boyvat says.

“When people make micro-robots, the question is always asked, ‘How can you put a battery on a robot that small?’ This technology gives a great answer to that question by turning it on its head: you don’t need to put a battery on it, you can power it in a different way,” says corresponding author Rob Wood, Ph.D., a Core Faculty member at the Wyss Institute who co-leads its Bioinspired Robotics Platform and the Charles River Professor of Engineering and Applied Sciences at SEAS.

“Medical devices today are commonly limited by the size of the batteries that power them, whereas these remotely powered origami robots can break through that size barrier and potentially offer entirely new, minimally invasive approaches for medicine and surgery in the future,” says Wyss Founding Director Donald Ingber, who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children’s Hospital, as well as a Professor of Bioengineering at Harvard’s School of Engineering and Applied Sciences.

The research paper was published in Science Robotics.

Stanford researchers develop a new type of soft, growing robot

By Taylor Kubota, Stanford News Service

Imagine rescuers searching for people in the rubble of a collapsed building. Instead of digging through the debris by hand or having dogs sniff for signs of life, they bring out a small, air-tight cylinder. They place the device at the entrance of the debris and flip a switch. From one end of the cylinder, a tendril extends into the mass of stones and dirt, like a fast-climbing vine. A camera at the tip of the tendril gives rescuers a view of the otherwise unreachable places beneath the rubble.

This is just one possible application of a new type of robot created by mechanical engineers at Stanford University, detailed in a July 19 Science Robotics paper. Inspired by natural organisms that cover distance by growing — such as vines, fungi and nerve cells — the researchers have made a proof of concept of their soft, growing robot and have run it through some challenging tests.

“Essentially, we’re trying to understand the fundamentals of this new approach to getting mobility or movement out of a mechanism,” explained Allison Okamura, professor of mechanical engineering and senior author of the paper. “It’s very, very different from the way that animals or people get around the world.”

To investigate what their robot can do, the group created prototypes that move through various obstacles, travel toward a designated goal, and grow into a free-standing structure. This robot could serve a wide range of purposes, particularly in the realms of search and rescue and medical devices, the researchers said.

A growing robot

The basic idea behind this robot is straightforward. It’s a tube of soft material folded inside itself, like an inside-out sock, that grows in one direction when the material at the front of the tube everts, as the tube becomes right-side-out. In the prototypes, the material was a thin, cheap plastic and the robot body everted when the scientists pumped pressurized air into the stationary end. In other versions, fluid could replace the pressurized air.

What makes this design so useful is that it moves the tip without moving the body.

“The body lengthens as the material extends from the end but the rest of the body doesn’t move,” explained Elliot Hawkes, a visiting assistant professor from the University of California, Santa Barbara and lead author of the paper. “The body can be stuck to the environment or jammed between rocks, but that doesn’t stop the robot because the tip can continue to progress as new material is added to the end.”
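
As a back-of-the-envelope illustration of that tip-growth principle, the sketch below estimates tip speed from pump flow rate under an idealized assumption of a constant-diameter, non-stretching tube; none of the numbers come from the paper.

    import math

    def tip_speed_m_per_s(flow_rate_m3_per_s, tube_diameter_m):
        """Idealized eversion: new volume appears only at the tip, so the tip
        advances at v = Q / A, where A is the tube's cross-sectional area."""
        area = math.pi * (tube_diameter_m / 2.0) ** 2
        return flow_rate_m3_per_s / area

    # Example: a 5 cm diameter tube fed 2 liters of air per second.
    print(tip_speed_m_per_s(2e-3, 0.05))  # ~1.0 m/s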

Graduate students Joseph Greer, left, and Laura Blumenschein, right, work with Elliot Hawkes, a visiting assistant professor from the University of California, Santa Barbara, on a prototype of the vinebot. (Image credit: L.A. Cicero)

The group tested the benefits of this method for getting the robot from one place to another in several ways. It grew through an obstacle course, where it traveled over flypaper, sticky glue and nails and up an ice wall to deliver a sensor, which could potentially sense carbon dioxide produced by trapped survivors. It successfully completed this course even though it was punctured by the nails because the area that was punctured didn’t continue to move and, as a result, self-sealed by staying on top of the nail.

In other demonstrations, the robot lifted a 100-kilogram crate, grew under a door gap that was 10 percent of its diameter and spiraled on itself to form a free-standing structure that then sent out a radio signal. The robot also maneuvered through the space above a dropped ceiling, which showed how it was able to navigate unknown obstacles as a robot like this might have to do in walls, under roads or inside pipes. Further, it pulled a cable through its body while growing above the dropped ceiling, offering a new method for routing wires in tight spaces.

Difficult environments

“The applications we’re focusing on are those where the robot moves through a difficult environment, where the features are unpredictable and there are unknown spaces,” said Laura Blumenschein, a graduate student in the Okamura lab and co-author of the paper. “If you can put a robot in these environments and it’s unaffected by the obstacles while it’s moving, you don’t need to worry about it getting damaged or stuck as it explores.”

Some iterations of these robots included a control system that differentially inflated the body, which made the robot turn right or left. The researchers developed a software system that based direction decisions on images coming in from a camera at the tip of the robot.
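
The article doesn't give the control law, but a minimal proportional-steering sketch conveys the idea: the target's horizontal offset in the camera image is mapped to differential inflation of two assumed left/right control chambers.

    def steering_command(target_x_px, image_width_px, gain=0.8):
        """Proportional steering: map the target's horizontal pixel offset to a
        pair of (left, right) chamber-inflation commands in [0, 1]."""
        # Normalized error in [-1, 1]; positive means the target is to the right.
        error = (target_x_px - image_width_px / 2.0) / (image_width_px / 2.0)
        cmd = max(-1.0, min(1.0, gain * error))
        # Assumed convention: inflating the left chamber bends the robot right.
        left = cmd if cmd > 0.0 else 0.0
        right = -cmd if cmd < 0.0 else 0.0
        return left, right

    print(steering_command(480, 640))  # target right of center -> inflate left chamber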

A primary advantage of soft robots is that they can be safer than hard, rigid robots not only because they are soft but also because they are often lightweight. This is especially useful in situations where a robot could be moving in close quarters with a person. Another benefit, in the case of this robot, is that it is flexible and can follow complicated paths. This, however, also poses some challenges.

Joey Greer, a graduate student in the Okamura lab and co-author of the paper, said that controlling a robot requires a precise model of its motion, which is difficult to establish for a soft robot. Rigid robots, by comparison, are much easier to model and control, but are unusable in many situations where flexibility or safety is necessary. “Also, using a camera to guide the robot to a target is a difficult problem because the camera imagery needs to be processed at the rate it is produced. A lot of work went into designing algorithms that both ran fast and produced results that were accurate enough for controlling the soft robot,” Greer said.

Going big—and small

The current prototype was built by hand and is powered by pneumatic air pressure. In the future, the researchers would like to create a version that would be manufactured automatically. Future versions may also grow using liquid, which could help deliver water to people trapped in tight spaces or to put out fires in closed rooms. They are also exploring new, tougher materials, like rip-stop nylon and Kevlar.

The vinebot is a tube of soft material that grows in one direction. (Image credit: L.A. Cicero)

The researchers also hope to scale the robot much larger and much smaller to see how it performs. They’ve already created a 1.8 mm version and believe small growing robots could advance medical procedures. In place of a tube that is pushed through the body, this type of soft robot would grow without dragging along delicate structures.

Okamura is a member of Stanford Bio-X and the Stanford Neurosciences Institute.

This research was funded by the National Science Foundation.

Indoor robots gaining momentum – and notoriety

Savioke Relay Dash delivery robot. Source: Savioke

Recent events demonstrate the growing presence of indoor mobile robots: (1) Savioke’s hotel butler robot won the 2017 IERA Inventors Award; (2) Knightscope’s security robot mistook a reflecting pond for a solid floor and dove in face-first, to the delight of Twitterdom and the media; and (3) robotic hospital delivery provider Aethon was sold to a Singaporean conglomerate.

Are we beginning to enter an era of multi-functional robots? Certainly that is the vision of each of the vendors listed below. They see their robots greet, assist and run errands during business hours and then, after hours, prowl and tally inventory and fixed assets, all the while – 24/7 – checking for anomalies and suspicious activity. SuperRobot? Or one of the many new mobile service robots that offer each of these services as separate tasks? For example, Savioke, maker of the hotel butler robot, is now deploying its Relay robots with FedEx in the warehousing and logistics sector.

The indoor robot marketplace

In an article in IEEE Spectrum, Travis Deyle, CEO of Silicon Valley startup Cobalt Robotics, which is developing indoor robots for security purposes, posited that commercial spaces are the next big marketplace for robotics and that there’s a massive, untapped market in each of the commercial spaces shown in his chart below:

“Commercial spaces could serve as a great stepping stone on the path toward general-purpose home robots by driving scale, volume, and capabilities. So… while billions are being spent on R&D for autonomous vehicles, indoor robots for commercial and public spaces reap the technology and cost benefits on sensors, computing, machine learning, and open-source software.”

Although the chart above focuses on the many applications within the commercial space, there is also much activity in various forms of indoor material handling using mobile robots in warehouses and distribution centers. The list of companies in that marketplace is quite large and will be detailed in a future article.

Hospital mobile robot firm sells to Singaporean conglomerate

ST Engineering acquired Pittsburgh, PA-based hospital robotics firm Aethon for $36 million. Aethon provides intralogistics in hospital environments by delivering goods and supplies using its TUG autonomous mobile robots. ST Engineering's strategic reasoning for the acquisition can be understood from this statement about the purchase:

“We evaluated the autonomous mobile robotics market thoroughly. Our evaluation led us to conclude that Aethon was the best company in this space having the right technology along with proven success in the commercialization and installation of autonomous mobile robots,” said Khee Loon Foo, General Manager, Kinetics Advanced Robotics of ST Kinetics.

Hotel robot wins 2017 IERA Inventors Award

The International Federation of Robotics (IFR) and the IEEE Robotics and Automation Society (IEEE/RAS) jointly sponsor an annual IERA (Innovation and Entrepreneurship in Robotics and Automation) Award which this year was presented to the Relay butler robot made by Savioke, a Silicon Valley startup.

Savioke's Relay robot makes deliveries all on its own in hotels, hospitals or logistics centers. Thanks to artificial intelligence and sensor technology, the robot can move safely through public spaces and navigate around people and obstacles dynamically.

The robots, which have already completed over 100,000 deliveries, can be seen in selected hotels in California, New York, Asia and the Middle East.

Indoor Robot Companies

Listed below are a few of the companies in the emerging mobile robot indoor commercial marketplaces described in Deyle's chart above. The list is not comprehensive but intended to give you an overview of who those new companies are, how far along they are, and how global they are.

Indoor Security Robots:

Recent research reports covering the security robots marketplace forecast that the market will reach $2.4 billion by 2022, at a compound annual growth rate (CAGR) of 9% until then. These forecasts include indoor and outdoor robots.

  • Knightscope is a Silicon Valley security robot startup with robots in shopping malls, exhibition halls, parking lots and office complexes. It was Knightscope's robot that took the face dive in the Washington, DC pond. [Graphic of Knightscope robot from Twitter.]
  • Cobalt Robotics is also a Silicon Valley security robotics startup, though, as co-founder Travis Deyle describes it, “Security is just one entrée to the whole emerging world of indoor robotics.”
  • Gamma Two Robotics, a Colorado patrol robot maker, whose new Ramsee mobile robots have sensors for heat, toxic gas, motion detection and acoustic listening.
  • NxT Robotics is a San Diego mobile robot startup offering both an indoor (Iris) and outdoor (Scorpion) security patrol solution.
  • SMP Robotics is a San Francisco maker of mobile security robots for outdoor and indoor facilities.
  • Anbot (Hunan Wanwei Intelligent Robot Technology Co.) is a Chinese security robot maker whose robot looks very similar to Knightscope’s. Its robots can be seen prowling airport and museum public spaces in China.
  • Robot Security Systems is a Netherlands-based startup indoor mobile security robot provider.
  • China Security & Surveillance Technology also offers both indoor and outdoor security mobile robots.

Indoor Guides, Assistants, Greeters, Food Handlers and Gofor Robots:

This list could be much larger – particularly the gofor robots in the material handling field – but has been limited to those startups delivering product or with working prototypes focused on one or all of the commercial indoor market sectors. 

  • MetraLabs is a German provider of fully autonomous mobile inventory, public space guide and retail robots for stores, malls and museums.
  • PAL Robotics is a Spanish maker of humanoid robots used as guides, entertainers, information providers and presenters – in multiple languages.
  • Pangolin Robot is a Chinese maker of restaurant server/waiter/busing robots and also has a line of greeting and delivery robots.
  • Simbe Robotics is a San Francisco provider of a retail space inventory robot auditing shelves for out-of-stock items, low stock items, misplaced items, and pricing errors. Simbe's Tally robot can perform during normal store hours alongside shoppers and employees or autonomously after hours.
  • Bossa Nova Robotics is a Pittsburgh developer of a store robot that scans products on the shelves, makes store maps and helps employees keep track of where items are located.
  • BlueBotics is a Swiss provider of mobile robots, robotic platforms and products for mobile guides, marketing assistants and industrial cleaning.
  • Pepper, the mobile emotion-detecting robot jointly produced by Foxconn, Alibaba and SoftBank, is serving as the first point of contact in coffee stores, banks, corporate offices and other public spaces.
  • Yujin Robot is a Korean consumer products maker with a line of hotel and restaurant delivery robots.
  • Fellow Robots is a Silicon Valley developer of the NAVii robot which is used as a greeter but also maps and performs inventory scans. 

ST Engineering acquires mobile robot maker Aethon for $36 million

Singapore Technologies Engineering Ltd (ST Engineering) has acquired Pittsburgh, PA-based robotics firm Aethon Inc through Vision Technologies Land Systems, Inc. (VTLS), and its wholly-owned subsidiary, VT Robotics, Inc, for $36 million.

The acquisition will be carried out by way of a merger with VT Robotics, a special entity newly incorporated for the transaction. The merger will see Aethon as the surviving entity, operating as a subsidiary of VTLS and part of the ST Group’s Land Systems sector. Aethon’s leadership team and employees will remain in place and the company will continue to operate out of its Pittsburgh, PA location.

ST Engineering, listed as S63 on the Singapore Stock Exchange, is a Singapore-based integrated defense and engineering group focused on aerospace, electronics, and land, sea and air unmanned systems for the battlefield. It employs over 21,000 people and has annual revenues of around $5 billion.

“We evaluated the autonomous mobile robotics market thoroughly. Our evaluation led us to conclude that Aethon was the best company in this space having the right technology along with proven success in the commercialization and installation of autonomous mobile robots. We look forward to working with the Pittsburgh, PA team to grow the company,” says Khee Loon Foo, General Manager, Kinetics Advanced Robotics of ST Kinetics.

Aethon provides intralogistics in manufacturing and hospital environments by delivering goods and supplies using its TUG autonomous mobile robots. TUGs are self-driving autonomous robots capable of hauling or towing up to 1,400 lbs as they dynamically and safely navigate around people and through the corridors of client facilities.

“This acquisition is a terrific event for our company, employees and our customers since it provides Aethon with the resources and corporate backing to grow and develop new innovative robotic technology and more aggressively pursue new markets. We will now be able to expand our development capabilities to enhance our current technology and bring exciting logistics solutions to new vertical and global markets,” says Aldo Zini, CEO of Aethon.

Does the next industrial revolution spell the end of manufacturing jobs?

By Jeff Morgan, Trinity College Dublin

Robots have been taking our jobs since the 1960s. So why are politicians and business leaders only now becoming so worried about robots causing mass unemployment?

It comes down to the question of what a robot really is. While science fiction has often portrayed robots as androids carrying out tasks in much the same way as humans, the reality is that robots take much more specialised forms. Traditional 20th century robots were automated machines and robotic arms building cars in factories. Commercial 21st century robots are supermarket self-checkouts, automated guided warehouse vehicles, and even burger-flipping machines in fast-food restaurants.

Ultimately, humans haven’t become completely redundant because these robots may be very efficient but they’re also kind of dumb. They do not think, they just act, in very accurate but very limited ways. Humans are still needed to work around robots, doing the jobs the machines can’t and fixing them when they get stuck. But this is all set to change thanks to a new wave of smarter, better value machines that can adapt to multiple tasks. This change will be so significant that it will create a new industrial revolution.

The fourth industrial revolution.
Christoph Roser, CC BY-SA

Industry 4.0

This era of “Industry 4.0” is being driven by the same technological advances that enable the capabilities of the smartphones in our pockets. It is a mix of low-cost and high-power computers, high-speed communication and artificial intelligence. This will produce smarter robots with better sensing and communication abilities that can adapt to different tasks, and even coordinate their work to meet demand without the input of humans.

In the manufacturing industry, where robots have arguably made the most headway of any sector, this will mean a dramatic shift from centralised to decentralised collaborative production. Traditional robots focused on single, fixed, high-speed operations and required a highly skilled human workforce to operate and maintain them. Industry 4.0 machines are flexible, collaborative and can operate more independently, which ultimately removes the need for a highly skilled workforce.


For large-scale manufacturers, Industry 4.0 means their robots will be able to sense their environment and communicate in an industrial network that can be run and monitored remotely. Each machine will produce large amounts of data that can be collectively studied using what is known as “big data” analysis. This will help identify ways to improve operating performance and production quality across the whole plant, for example by better predicting when maintenance is needed and automatically scheduling it.
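
As a toy illustration of that predictive-maintenance idea (real plants use far richer models), the sketch below flags a machine for service once the rolling mean of a degradation signal, such as vibration, crosses a threshold.

    from collections import deque

    class MaintenanceMonitor:
        """Flag a machine when the rolling mean of a health signal exceeds a limit."""

        def __init__(self, window=50, threshold=0.8):
            self.samples = deque(maxlen=window)
            self.threshold = threshold

        def update(self, reading):
            """Add one sensor reading; return True if maintenance should be scheduled."""
            self.samples.append(reading)
            return sum(self.samples) / len(self.samples) > self.threshold

    monitor = MaintenanceMonitor(window=10, threshold=0.8)
    for vibration in [0.5, 0.6, 0.7, 0.9, 1.0, 1.1, 1.2]:
        if monitor.update(vibration):
            print("schedule maintenance")
            break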

For small-to-medium manufacturing businesses, Industry 4.0 will make it cheaper and easier to use robots. It will create machines that can be reconfigured to perform multiple jobs and adjusted to work on a more diverse product range and different production volumes. This sector is already beginning to benefit from reconfigurable robots designed to collaborate with human workers and analyse their own work to look for improvements, such as BAXTER, SR-TEX and CareSelect.

Helping hands.
Rethink Robotics

While these machines are getting smarter, they are still not as smart as us. Today’s industrial artificial intelligence operates at a narrow level: machines give the appearance of human intelligence while executing behaviour that was designed by humans.

What’s coming next is known as “deep learning”. Similar to big data analysis, it involves processing large quantities of data in real time to make decisions about what is the best action to take. The difference is that the machine learns from the data so it can improve its decision making. A perfect example of deep learning was demonstrated by Google’s AlphaGo software, which taught itself to beat the world’s greatest Go players.

The turning point in applying artificial intelligence to manufacturing could come with the application of special microchips called graphics processing units (GPUs). These enable deep learning to be applied to extremely large data sets at extremely fast speeds. But there is still some way to go, and big industrial companies are recruiting vast numbers of scientists to further develop the technology.

Impact on industry

As Industry 4.0 technology becomes smarter and more widely available, manufacturers of any size will be able to deploy cost-effective, multipurpose and collaborative machines as standard. This will lead to industrial growth and market competitiveness, with a greater understanding of production processes leading to new high-quality products and digital services.

Exactly what impact a smarter robotic workforce with the potential to operate on its own will have on the manufacturing industry is still widely disputed. Artificial intelligence as we know it from science fiction is still in its infancy. It could well be the 22nd century before robots really have the potential to make human labour obsolete by developing not just deep learning but true artificial understanding that mimics human thinking.

Ideally, Industry 4.0 will enable human workers to achieve more in their jobs by removing repetitive tasks and giving them better robotic tools. In theory, this would allow us humans to focus more on business development, creativity and science, which it would be much harder for any robot to do. Technology that has made humans redundant in the past has forced us to adapt, generally with more education.

But because Industry 4.0 robots will be able to operate largely on their own, we might see much greater human redundancy from manufacturing jobs without other sectors being able to create enough new work. Then we might see more political moves to protect human labour, such as taxing robots.

Again, in an ideal scenario, humans may be able to focus on doing the things that make us human, perhaps fuelled by a basic income generated from robotic work. Ultimately, it will be up to us to define whether the robotic workforce will work for us, with us, or against us.

This article was originally published on The Conversation. Read the original article.

Reshaping computer-aided design

Adriana Schulz, an MIT PhD student in the Computer Science and Artificial Intelligence Laboratory, demonstrates the InstantCAD computer-aided-design-optimizing interface. Photo: Rachel Gordon/MIT CSAIL

Almost every object we use is developed with computer-aided design (CAD). Ironically, while CAD programs are good for creating designs, using them is actually very difficult and time-consuming if you’re trying to improve an existing design to make the best possible product. Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Columbia University are trying to make the process faster and easier: in a new paper, they’ve developed InstantCAD, a tool that lets designers interactively edit, improve, and optimize CAD models using a more streamlined and intuitive workflow.

 InstantCAD integrates seamlessly with existing CAD programs as a plug-in, meaning that designers don’t have to learn new tools to use it.

“From more ergonomic desks to higher-performance cars, this is really about creating better products in less time,” says Department of Electrical Engineering and Computer Science PhD student and lead author Adriana Schulz, who will be presenting the paper at this month’s SIGGRAPH computer-graphics conference in Los Angeles. “We think this could be a real game changer for automakers and other companies that want to be able to test and improve complex designs in a matter of seconds to minutes, instead of hours to days.”

The paper was co-written by Associate Professor Wojciech Matusik, PhD student Jie Xu, and postdoc Bo Zhu of CSAIL, as well as Associate Professor Eitan Grinspun and Assistant Professor Changxi Zheng of Columbia University.

Traditional CAD systems are “parametric,” which means that when engineers design models, they can change properties like shape and size (“parameters”) based on different priorities. For example, when designing a wind turbine you might have to make trade-offs between how much airflow you can get versus how much energy it will generate.

InstantCAD enables designers to interactively edit, improve, and optimize CAD models using a more streamlined and intuitive workflow. Photo: Rachel Gordon/MIT CSAIL

However, it can be difficult to determine the absolute best design for what you want your object to do, because there are many different options for modifying the design. On top of that, the process is time-consuming because changing a single property means having to wait to regenerate the new design, run a simulation, see the result, and then figure out what to do next.

With InstantCAD, the process of improving and optimizing the design can be done in real-time, saving engineers days or weeks. After an object is designed in a commercial CAD program, it is sent to a cloud platform where multiple geometric evaluations and simulations are run at the same time.

With this precomputed data, you can instantly improve and optimize the design in two ways. With “interactive exploration,” a user interface provides real-time feedback on how design changes will affect performance, like how the shape of a plane wing impacts air pressure distribution. With “automatic optimization,” you simply tell the system to give you a design with specific characteristics, like a drone that’s as lightweight as possible while still being able to carry the maximum amount of weight.

The reason it’s hard to optimize an object’s design is the massive size of the design space (the number of possible design options).

“It’s too data-intensive to compute every single point, so we have to come up with a way to predict any point in this space from just a small number of sampled data points,” says Schulz. “This is called ‘interpolation,’ and our key technical contribution is a new algorithm we developed to take these samples and estimate points in the space.”
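
A minimal sketch of that sample-and-interpolate idea follows, using ordinary grid interpolation from SciPy rather than the paper's actual algorithm; the design parameters and performance function are invented stand-ins for the precomputed cloud simulations.

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Hypothetical precomputed samples: a performance metric simulated on a
    # coarse grid over two design parameters (e.g., wing span and chord).
    span = np.linspace(1.0, 3.0, 5)
    chord = np.linspace(0.2, 0.6, 5)
    S, C = np.meshgrid(span, chord, indexing="ij")
    drag = 0.5 * C / S + 0.1 * S * C          # toy model, not a real simulation

    estimate = RegularGridInterpolator((span, chord), drag)

    # "Interactive exploration": instant estimates at unsampled design points.
    print(estimate([[2.3, 0.45]]))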

Matusik says InstantCAD could be particularly helpful for intricate designs for objects like cars, planes, and robots, especially for industries like car manufacturing that care a lot about squeezing every little bit of performance out of a product.

“Our system doesn’t just save you time for changing designs, but has the potential to dramatically improve the quality of the products themselves,” says Matusik. “The more complex your design gets, the more important this kind of a tool can be.”

Because of the system’s productivity boosts and CAD integration, Schulz is confident that it will have immediate applications for industry. Down the line, she hopes that InstantCAD can also help lower the barrier for entry for casual users.

“In a world where 3-D printing and industrial robotics are making manufacturing more accessible, we need systems that make the actual design process more accessible, too,” Schulz says. “With systems like this that make it easier to customize objects to meet your specific needs, we hope to be paving the way to a new age of personal manufacturing and DIY design.”

The project was supported by the National Science Foundation.

The Drone Center’s Weekly Roundup: 7/24/17

The K5 security robot fell into a fountain in Washington, D.C.

July 17, 2017 – July 23, 2017


News

A U.S. drone strike in Afghanistan is reported to have mistakenly killed 15 Afghan soldiers. In a statement, Afghanistan’s Ministry of Defense reported that the strike hit a security outpost in Helmand province. (Voice of America)

The U.K. Department of Transport is developing regulations that would implement a drone registration program, safety courses for drone owners, and more extensive geo-fencing to keep drones out of restricted areas. According to the BBC, it is not yet clear when the new rules will go into effect.

China announced plans to advance the development of artificial intelligence. The State Council of the People’s Republic of China released a plan to grow AI-related industries into a $59.07 billion sector by 2025. (Reuters)

Canada’s transportation safety agency issued an update to its drone regulations. The update relaxes key provisions for recreational and commercial drone users. (CTV News)

Commentary, Analysis, and Art

In testimony before the U.S. Senate, Air Force General Paul Selva argued that autonomous weapons should never be allowed to decide whether or not to take a human life. (Breaking Defense)

In a discussion at the National Governors Association annual meeting, Tesla CEO Elon Musk argued that artificial intelligence is a “fundamental existential risk for human civilization.” (Real Clear Politics)

At Wired, Tom Simonite argues that Musk’s comments are a distraction from the real problems of artificial intelligence.

A report by the Harvard Belfer Center for Science and International Affairs considers how artificial intelligence could revolutionize warfare. (Wired)

At the Wall Street Journal, Jeremy Page and Paul Sonne look at how China is stepping up exports of drones to U.S. allies.

At Scout Warrior, David Hambling looks at how separatists in eastern Ukraine are arming small drones with grenades and other ordnance.

A NASA study found that noise from drones is more irritating to people than noise from cars. (New Scientist)

At Popular Mechanics, Eric Tegler looks at how the U.S. Air Force is testing equipment intended for the Reaper drone on a World War II-era Douglas DC-3.

At War on the Rocks, Gregory C. Allen considers how animals in nature inspire the design of military robots.

Also at War on the Rocks, Scott Cuomo argues that the Marine Corps needs a persistent aerial surveillance and strike drone like the MQ-9 Reaper.

At the Intercept, Robert Trafford and Nick Turse write that Cameroonian forces used a drone base to torture suspected members of Boko Haram.

CNN looks at how Agadez, a city in northern Niger where the U.S. is building a drone base, is an unstable “tinderbox.”

At the Motley Fool, Rich Smith offers suggestions on drone manufacturers to invest in.

Know Your Drone

Researchers at MIT are developing small power-efficient computer chips that could be used to build highly autonomous micro-drones. (The Drive)

An Australian student has developed a drone that is capable of flying for longer and at much higher speeds than other consumer systems. (ABC News)

Airbus Defense and Space conducted a test flight of a subscale model of the Sagitta stealth drone that it is developing with a group of German research institutes. (Aviation Week)

A team at the Singapore University of Technology & Design has developed a vertical take-off drone that transitions to horizontal flight by turning its rotors into wings. (The Verge)

A team at Stanford University is developing a wormlike robot that can move by expanding in size. (Science Robotics)

The Chinese military is developing what appears to be an 8×8 unmanned supply truck. (IHS Jane’s Defence Weekly)

Russian defense firm Kronstadt unveiled the concept for its Orion-E, a military drone that will be about the size of the MQ-1 Predator. (FlightGlobal)

Meanwhile, Russian Helicopters unveiled the VRT3000, a co-axial rotor reconnaissance helicopter drone. (Shephard Media)

Estonian firm Threod Systems is developing a new variant of its Stream tactical military drone. (IHS Jane’s International Defence Review)

Researchers at MIT have developed a small robot that can swim through water pipes searching for leaks. (Dezeen)

YouTube channel Make it Extreme published a video showing how one can build a DIY counter-drone net gun. (Popular Mechanics)

Estonian Startup Marduk Technologies plans to begin testing its Shark counter-drone system with the Estonian military in August or September. (IHS Jane’s International Defence Review)

The U.S. Navy issued a draft Request for Proposals detailing some of the characteristics of its planned MQ-25A Stingray refueling drone. (USNI News)

Israeli firm GPSdome has developed a jam-resistant GPS system for drones. (C4ISRNET)

Ukraine-based defense firm Infocom revealed new details about its Laska armed unmanned ground vehicle. (IHS Jane’s International Defence Review)

Drones at Work

Singapore has offered the Philippines drones and other military equipment for operations against Islamist militant groups. (Reuters)

Police departments in Dorset, Cornwall, and Devon in the U.K. have been using drones to track reckless motorcyclists. (The Drive)

The town of Deadwood in South Dakota has approved an ordinance restricting the use of drones in the city. (Black Hills Pioneer)

Investigators have concluded that a mid-air collision in Australia that was thought to have been caused by a drone was actually caused by a bat. (ABC News)

In a test, the U.S. Navy used its Laser Weapon System to shoot down a drone. (CNN)

A remotely operated robot exploring the interior of Fukushima’s reactor 3 appears to have discovered objects that could be fuel debris. (Japan Times)

The U.S. Air Force has established a program to teach coalition forces how to react to adversary drones on the battlefield. (Unmanned Systems Technology)

A drone flying near the scene of a car crash in Avonport, Canada delayed the departure of a helicopter that was airlifting a patient to hospital. (CBC)

The police department of Harvey County, Kansas found a missing 91-year-old man by using a drone. (KWCH)

Canada’s OEX Recovery group will use a Kraken unmanned undersea vehicle to search for the remains of several subscale prototype jets that crashed into Lake Ontario in the 1950s. (Unmanned Systems Technology)

The U.S. Navy Special Warfare Command is testing a vehicle-mounted SkySafe counter-drone system. (GCN)

LG Electronics has begun a live trial of a series of cleaning and guide robots at Incheon International Airport in South Korea. (ZDNet)

Meanwhile, a Knightscope security robot fell into a fountain while patrolling an office building in Washington, D.C. (CNN)

A drone crashed while racing a Formula E electric race car during an event in Brooklyn. (The Drive)

Industry Intel

Counter-drone company SkySafe raised $11.5 million in a Series A funding round led by Andreessen Horowitz. (TechCrunch)

The U.S. Navy awarded Hydroid a $27.3 million contract modification for the Mk 18 Kingfish family of unmanned undersea vehicles. (DoD)

The U.S. Coast Guard awarded General Dynamics Mission Systems a $29,610 contract for an unmanned underwater vehicle. (FBO)

The U.S. Air Force awarded Engility Corporation a contract for “autonomous collaborative vehicle research and development.” (FBO)

The U.S. Navy awarded Insitu a $39,810 contract for software for the RQ-21A reconnaissance drone. (FBO)

The U.S. Army awarded Leonardo DRS and Moog a $16 million contract to develop a vehicle-mounted counter-drone system. (UPI)

The U.S. Department of Homeland Security awarded General Atomics Aeronautical Systems a $3.9 million contract for UAS operational support and maintenance. (USASpending)

The Defense Advanced Research Projects Agency awarded BAE Systems a $4.6 million contract under the Mobile Offboard Clandestine Communications Approach program. (USNI News)

Georgia’s Innovation Fund Tiny Grant program awarded the Long Cane Middle School an $8,000 grant to develop a curriculum on drones. (The Journal)

U.S. drone firm Measure will offer new inspection services to the solar energy industry. (Unmanned Systems Technology)

Propeller Aero will begin distributing Trimble’s Connected Site, a platform for analyzing data from drones. (Unmanned Systems Technology)

Solent Local Enterprise Partnership awarded BAE Systems a $593,871 grant to design a testing site for autonomous systems in the U.K. (Inside Unmanned Systems)

The U.K.’s Ministry of Defense awarded Inzpire Limited a contract to help train pilots of the new General Atomics Protector drone. (Inzpire)

Iran Aircraft Manufacturing Industries announced that it will begin marketing the Hamaseh surveillance and strike drone to international customers. (FlightGlobal)

Israel Aerospace Industries will offer India an agreement to produce the Heron TP domestically. (FlightGlobal)

Israeli drone company Aeronautics will acquire an unnamed U.S. firm for $6 million. (IHS Jane’s Defense Weekly)


Robots Podcast #239: Robot Academy, with Peter Corke



In this episode, Audrow Nash interviews Peter Corke, Professor of Robotics at the Queensland University of Technology and Director of the Australian Centre for Robotic Vision, about Robot Academy. Robot Academy is an online platform that provides free-to-use undergraduate-level learning resources for robotics and robotic vision.

The content was developed for two 6-week Massive Open Online Courses (MOOCs) that Corke taught in 2015 and 2016. This content is now available as individual lessons (over 200 videos, each less than 10 minutes long) or in masterclasses (collections of videos, around 1 hour in duration, previously a MOOC lecture). Unlike a MOOC, all lessons are available all the time.

While the content is typically designed for undergraduate-level students, around 20% of the lessons require no more than general knowledge. Each lesson is rated in terms of difficulty (on a 5-point scale), and Robot Academy references videos on Khan Academy to help students get up to speed to follow more advanced lessons.


Peter Corke

Peter Corke is Professor of Robotics and Control at the Queensland University of Technology leading the ARC Centre of Excellence for Robotic Vision in Australia. Previously he was a Senior Principal Research Scientist at the CSIRO ICT Centre where he founded and led the Autonomous Systems laboratory, the Sensors and Sensor Networks research theme and the Sensors and Sensor Networks Transformational Capability Platform. He is a Fellow of the IEEE. He was the Editor-in-Chief of the IEEE Robotics and Automation magazine; founding editor of the Journal of Field Robotics; member of the editorial board of the International Journal of Robotics Research, and the Springer STAR series. He has over 300 publications in the field and has held visiting positions at the University of Pennsylvania, University of Illinois at Urbana-Champaign, Carnegie-Mellon University Robotics Institute, and Oxford University.


Multi-directional gravity assist harness helps rehabilitation

Credit: EPFL

When training to regain movement after stroke or spinal cord injury (SCI), patients must once again learn how to keep their balance during walking movements. Current clinical methods support the weight of the patient during movement while setting the body off balance. This means that when patients are ready to walk without mechanical assistance, it can be hard to re-train the body to balance against gravity. This is the issue addressed in a recent paper published in Science Translational Medicine by a team led by Courtine-Lab and featuring Ijspeert Lab, NCCR Robotics and EPFL.

During walking, a combination of forces move the human body forward. In fact, the interaction of feet with the ground creates the majority of forward propulsion, but with every step, multiple muscles in the body are engaged to maintain movement and prevent falls. In order to fully regain the ability to walk, patients must develop both the muscles and the neural pathways required in these movements.

During partial body weight-supported gait therapy (whereby a patient trains on a treadmill while a robotic support system prevents them from falling), a patient is merely lifted upwards, with no support for forward or sideways movements, massively altering how the person within the support system moves. In fact, those within the training system use shorter steps, slower movements and less body rotation than the same people tested walking unaided.

In an effort to reduce these limitations of current therapy methods, the team developed a multi-directional gravity assist mechanism, meaning that the system supports patients not only in remaining upright, but also in moving forwards. This individually tailored support allows patients to walk in a natural and comfortable way, training the body to counterbalance against gravity and repositioning the torso in a natural position for walking.

The team developed a system, RYSEN, which allows patients to operate within a wide area and in a range of activities, from standing and walking to following a slalom course or a horizontal ladder projected in light onto the floor. They developed an algorithm that measures how the patient is walking and updates the support provided as they complete their training. The team found that all patients required the system to be tailored to them before use, but that by configuring the upward and forward forces applied during training, almost all subjects experienced significant improvements in movement, even with small upward and forward forces on their torso. In fact, patients paralyzed after SCI or stroke found that, using the system, they were able to walk and thus begin to rebuild muscles and neurological pathways.
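
The published algorithm is more sophisticated, but a toy version of the adapt-the-assistance loop might look like the following, where the forward force is nudged until the patient reaches a target gait speed; all gains and limits here are invented.

    def update_assist(forward_force_n, gait_speed_mps, target_speed_mps,
                      step_n=1.0, max_force_n=30.0):
        """Toy adaptation rule: add forward assistance while the patient walks
        slower than the target, remove it when they walk faster."""
        if gait_speed_mps < target_speed_mps:
            forward_force_n += step_n
        else:
            forward_force_n -= step_n
        return min(max(forward_force_n, 0.0), max_force_n)  # assumed safety clamp

    force = 5.0
    for speed in [0.4, 0.45, 0.5, 0.62]:          # measured gait speeds (m/s)
        force = update_assist(force, speed, target_speed_mps=0.6)
    print(force)  # 7.0 N after three increases and one decrease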

This work exists within a larger framework at NCCR Robotics, whereby researchers are using gravity-assisted technologies to play a key role in clinical trials on electrical spinal cord stimulation with the ultimate aim of creating technologies that will improve rehabilitation after spinal cord injury and stroke.


Reference:
Mignardot, J.-B., Le Goff, C. G., van den Brand, R., Capogrosso, M., Fumeaux, N., Vallery, H., Anil, S., Lanini, J., Fodor, I., Eberle, G., Ijspeert, A., Schurch, B., Curt, A., Carda, S., Bloch, J., von Zitzewitz, J. and Courtine, G., “A multidirectional gravity-assist algorithm that enhances locomotor control in patients with stroke or spinal cord injury,” Science Translational Medicine, 2017.


Federal regulations pass next hurdle

This week’s news is preliminary, but a U.S. House committee panel passed new federal regulations which suggest sweeping change in the U.S. regulatory approach to robocars.

Today, all cars sold must comply with the Federal Motor Vehicle Safety Standards (FMVSS). This is a huge set of standards, and it’s full of things written with human-driven cars in mind; making a radically different vehicle, like the Zoox, or the Waymo Firefly, or a delivery robot, is simply not going to happen under those standards. There is a provision where NHTSA can offer exemptions, but it’s in small volumes, mostly for prototype and testing vehicles. The new rules would allow a vendor to get an exemption to make 100,000 vehicles per year, which should be enough for the early years of robocar deployment.

Secondly, these and other new regulations would preempt state regulations. Most players (except some states) have pushed for this. Many states don’t want the burden of regulating robocar design, since they don’t have the resources to do so, and most vendors don’t want what they call a “patchwork” of 50 regulations in the USA. My take is different. I agree the cost of a patchwork is not to be ignored, but the benefits of having jurisdictional competition may be much greater. When California proposed to ban vehicles like the Google Firefly, Texas immediately said, “Come to Texas, we won’t get in your way.” That pushed California to rethink. Having one regulation is good — but it has to be the right regulation, and we’re much too early in the game to know what the right regulation is.

This is just a committee in the House, and there is lots more distance to go, including the Senate and all the other usual hurdles. Whatever people think about how much regulation there should be, everybody has known that the FMVSS needs a difficult and complex revision to work in the world of robocars, and a temporary exemption can be a solution to that.

New Horizon 2020 robotics projects, 2016: HEPHAESTUS

In 2016, the European Union co-funded 17 new robotics projects from the Horizon 2020 Framework Programme for research and innovation. 16 of these resulted from the robotics work programme, and 1 project resulted from the Societal Challenges part of Horizon 2020. The robotics work programme implements the robotics strategy developed by SPARC, the Public-Private Partnership for Robotics in Europe (see the Strategic Research Agenda). 

Every week, euRobotics will publish a video interview with a project, so that you can find out more about their activities. This week features HEPHAESTUS: Highly automatEd PHysical Achievements and performancES using cable roboTs Unique Systems.

Objectives

The Hephaestus project addresses novel concepts for introducing robotics and autonomous systems into the construction sector, where such products are rare or almost non-existent. It focuses on providing novel solutions for one of the most important parts of the construction sector: facades, and the work that needs to be done when a building’s facade is built or needs maintenance. It proposes a new, automated way to install these products, providing a solution that is highly industrialized not only in production but also in installation and maintenance.

Expected impact

Hephaestus aims to automate the on-site execution and installation process, empowering and strengthening the construction sector in Europe and positioning the European robotics industry as the leader and reference in this huge, newly growing market. The Hephaestus solution is expected to reduce the number of work accidents during the façade installation process by up to 90%, cut installation costs by around 20%, and cut annual maintenance and cleaning costs by around 44%. Curtain wall construction currently accounts for an annual market of €30,000 million in Europe.

Partners

FUNDACIÓN TECNALIA R&I
TECHNISCHE UNIVERSITÄT MÜNCHEN
FRAUNHOFER-IPA
CNRS-LIRMM
CEMVISA VICINAY
NLINK AS

Coordinator: Julen Astudillo Larraz, TECNALIA
Julen.astudillo@tecnalia.com

Project website: www.hephaestus-project.eu

Watch all EU-projects videos


Artificial intelligence suggests recipes based on food photos

Pic2Recipe, an artificial intelligence system developed at MIT, can take a photo of an entree and suggest a similar recipe to it. Photo: Jason Dorfman/MIT CSAIL

There are few things social media users love more than flooding their feeds with photos of food. Yet we seldom use these images for much more than a quick scroll on our cellphones. Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) believe that analyzing photos like these could help us learn recipes and better understand people’s eating habits.

In a new paper with the Qatar Computing Research Institute (QCRI), the team trained an artificial intelligence system called Pic2Recipe to look at a photo of food and be able to predict the ingredients and suggest similar recipes.

“In computer vision, food is mostly neglected because we don’t have the large-scale datasets needed to make predictions,” says Yusuf Aytar, an MIT postdoc who co-wrote a paper about the system with MIT Professor Antonio Torralba. “But seemingly useless photos on social media can actually provide valuable insight into health habits and dietary preferences.”

 The paper will be presented later this month at the Computer Vision and Pattern Recognition conference in Honolulu. CSAIL graduate student Nick Hynes was lead author alongside Amaia Salvador of the Polytechnic University of Catalonia in Spain. Co-authors include CSAIL postdoc Javier Marin, as well as scientist Ferda Ofli and research director Ingmar Weber of QCRI.

How it works

The web has spurred a huge growth of research in the area of classifying food data, but the majority of it has used much smaller datasets, which often leads to major gaps in labeling foods.

In 2014 Swiss researchers created the “Food-101” dataset and used it to develop an algorithm that could recognize images of food with 50 percent accuracy. Future iterations only improved accuracy to about 80 percent, suggesting that the size of the dataset may be a limiting factor.

Even the larger datasets have often been somewhat limited in how well they generalize across populations. A database from the City University of Hong Kong has over 110,000 images and 65,000 recipes, each with ingredient lists and instructions, but only contains Chinese cuisine.

The CSAIL team’s project aims to build off of this work but dramatically expand in scope. Researchers combed websites like All Recipes and Food.com to develop “Recipe1M,” a database of over 1 million recipes that were annotated with information about the ingredients in a wide range of dishes. They then used that data to train a neural network to find patterns and make connections between the food images and the corresponding ingredients and recipes.

Given a photo of a food item, Pic2Recipe could identify ingredients like flour, eggs, and butter, and then suggest several recipes that it determined to be similar to images from the database. (The team has an online demo where people can upload their own food photos to test it out.)
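
The article doesn’t spell out the model, but retrieval systems of this kind typically embed images and recipes into a shared vector space and rank recipes by similarity to the query image. Below is a hedged sketch of that retrieval step, with random vectors standing in for learned embeddings.

    import numpy as np

    def retrieve_recipes(image_emb, recipe_embs, k=5):
        """Rank recipes by cosine similarity to an image in a shared embedding space."""
        img = image_emb / np.linalg.norm(image_emb)
        rec = recipe_embs / np.linalg.norm(recipe_embs, axis=1, keepdims=True)
        scores = rec @ img                      # cosine similarity per recipe
        return np.argsort(-scores)[:k]          # indices of the k best matches

    rng = np.random.default_rng(0)
    recipe_embs = rng.standard_normal((1000, 512))   # small stand-in for 1M recipes
    image_emb = rng.standard_normal(512)
    print(retrieve_recipes(image_emb, recipe_embs))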

“You can imagine people using this to track their daily nutrition, or to photograph their meal at a restaurant and know what’s needed to cook it at home later,” says Christoph Trattner, an assistant professor at MODUL University Vienna in the New Media Technology Department who was not involved in the paper. “The team’s approach works at a similar level to human judgement, which is remarkable.”

The system did particularly well with desserts like cookies or muffins, since that was a main theme in the database. However, it had difficulty determining ingredients for more ambiguous foods, like sushi rolls and smoothies.

It was also often stumped when there were similar recipes for the same dishes. For example, there are dozens of ways to make lasagna, so the team needed to make sure that the system wouldn’t “penalize” recipes that are similar when trying to separate those that are different. (One way to solve this was by seeing if the ingredients in each are generally similar before comparing the recipes themselves.)
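
One simple way to test whether two recipes’ ingredients are “generally similar,” in the spirit of the workaround just described, is Jaccard overlap between their ingredient sets; this is an illustrative choice, not necessarily what the team used.

    def jaccard(a, b):
        """Jaccard similarity of two ingredient sets: |A & B| / |A | B|."""
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if (a or b) else 1.0

    lasagna_1 = {"pasta", "tomato", "ricotta", "beef", "mozzarella"}
    lasagna_2 = {"pasta", "tomato", "ricotta", "spinach", "mozzarella"}
    print(jaccard(lasagna_1, lasagna_2))  # ~0.67 -> likely variants of one dish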

In the future, the team hopes to improve the system so that it can understand food in even more detail. This could mean being able to infer how a food is prepared (e.g., stewed versus diced) or to distinguish different variations of foods, like mushrooms or onions.

The researchers are also interested in potentially developing the system into a “dinner aide” that could figure out what to cook given a dietary preference and a list of items in the fridge.

“This could potentially help people figure out what’s in their food when they don’t have explicit nutritional information,” says Hynes. “For example, if you know what ingredients went into a dish but not the amount, you can take a photo, enter the ingredients, and run the model to find a similar recipe with known quantities, and then use that information to approximate your own meal.”

The project was funded, in part, by QCRI, as well as the European Regional Development Fund (ERDF) and the Spanish Ministry of Economy, Industry, and Competitiveness.

Robots to the rescue!

Emily – short for Emergency Integrated Lifesaving Lanyard – is a remote-controlled rescue boat used by lifeguards to save people’s lives at sea (Photo: Hydronalix – EMILY robot)

This article was first published on the IEC e-tech website.

Rapid advances in technology are revolutionizing the roles of aerial, terrestrial and maritime robotic systems in disaster relief, search and rescue (SAR) and salvage operations. Robots and drones can be deployed quickly in areas deemed too unsafe for humans and are used to guide rescuers, collect data, deliver essential supplies or provide communication services.

Well-established use

The first reported use of SAR robots was to explore the wreckage beneath the collapsed twin towers of the World Trade Center in New York after the September 2001 terrorist attacks. Drones and robots have since been used to survey damage after disasters such as the Fukushima Daiichi nuclear power plant accident in Japan in 2011 and the earthquakes in Haiti (2010) and Nepal (2015). To date, more than 50 deployments of disaster robots have been documented worldwide, according to the Texas-based Center for Robot‑Assisted Search & Rescue (CRASAR).

Robin Murphy, head of CRASAR and author of the book Disaster Robotics, says:

The impact of earthquakes, hurricanes, flooding […] is increasing, so the need for robots for all phases of a disaster, from prevention to response and recovery, will increase as well.

Eyes in the sky

Drones, also known as unmanned aerial vehicles (UAVs), can detect and enter damaged buildings, helping rescue robots and responders on the ground prioritize which areas to search first and speeding up the search for survivors. The faster SAR teams respond, the higher the survival rate is likely to be. Rescue drones create real-time maps from aerial surveys and send back photos, videos and sensor data to support damage assessments.

Drones used for SAR and disaster relief are most commonly powered by rechargeable batteries and are operated autonomously through onboard computers or by remote control. Their equipment typically comprises radar and laser scanners, multiple sensors and video and optical cameras as well as infrared cameras that are used to identify heat signatures of human bodies and other objects. This helps rescuers to locate survivors at night and in large, open environments and to identify hot spots from fires. Listening devices can pick up hard-to-hear audio, while Wi-Fi antennas and other attachments detect signals given off by mobile phones and plot a map that outlines the locations of victims.

New technologies in use or development for rescue drones and robots include ways of increasing survivor detection. Sensors scan areas for heartbeats and breathing, multisensor probes respond to odours or sounds and chemical sensors signal the presence of gases.

Standards put safety first

Much of the technology used in drones comes from commodity electronics developed for consumer essentials like mobile phones. Drones also require Global Positioning System (GPS) units, wireless transmitters, signal processors and microelectromechanical systems (MEMS). The flight controller also collects data from barometric pressure and airspeed sensors.

IEC International Standards produced by a range of IEC Technical Committees (TCs) and Subcommittees (SCs) cover the components of drones such as batteries, MEMS and other sensors, with an emphasis on safety and interoperability.

IEC TC 47: Semiconductor devices, and its SC 47F: Micro electromechanical systems, are responsible for compiling a wide range of International Standards for the semiconductor devices used in sensors and the MEMS essential to the safe operation of drone flights. These include accelerometers, altimeters, magnetometers (compasses), gyroscopes and pressure sensors. IEC TC 56: Dependability, covers the reliability of electronic components and equipment.

IEC TC 2: Rotating machinery, prepares International Standards covering specifications for rotating electrical machines, while IEC TC 91: Electronics assembly technology, is responsible for standards on electronic assembly technologies including components.

IEC SC 21A: Secondary cells and batteries containing alkaline or other non-acid electrolytes, compiles International Standards for batteries used in mobile applications, as well as for large-capacity lithium cells and batteries.

Ideal for isolated and remote hard-to-access areas

Drones are useful not only when natural disasters make access by air, land, sea or road difficult, but also in isolated regions that lack accessible infrastructure. Recently, drones have started delivering medical supplies in areas where finding emergency healthcare is extremely difficult. In 2014, Médecins Sans Frontières piloted the use of drones to deliver vaccines and medicine in Papua New Guinea. In 2016, the US robotics company Zipline launched a drone delivery service, in partnership with the government of Rwanda, to supply blood and medical supplies throughout the mountainous East African country. Zipline says its battery-powered drones can fly 120 km on a single charge to deliver medicine speedily, without the need for refrigeration or insulation.

Rwanda launches world’s first drone service to deliver blood to patients in remote areas of the country (Photo: Zipline)

A project by a company in the Netherlands to help refugees who get into difficulty in the Mediterranean Sea offers another example of drones being used for humanitarian purposes. Its SAR drone is designed to fly long distances, detect boats and, where necessary, drop life jackets, life buoys, food and medicine.

Currently only about a quarter of the world’s countries regulate the use of drones. Their deployment in disaster relief operations poses challenges involving regulatory issues, particularly when decisions are made on an ad hoc basis by local and national authorities. Humanitarian relief agencies also warn of the risks of relief drones being mistaken for military aircraft.

Two arms good, four arms better

Japan and the US lead the world in the development of rescue and disaster relief robots. Teams from both countries collaborated in recovery efforts after an earthquake and tsunami hit Japan in March 2011, causing a meltdown at the Fukushima nuclear power plant. While a Japanese team operated an eight-meter-long, snake-like robot fitted with a camera, the US contribution included two remote-controlled robots. The first was a lightweight 22 kg model previously used for bomb disposal and other military tasks before being reconfigured for disaster relief operations. The larger US model, capable of lifting up to 90 kg, was adapted from a device originally used for firefighting and clearing rubble.

Endeavor Robotics 510 Packbots are deployed in emergency situations when direct human intervention is dangerous or impossible, such as after industrial or nuclear accidents. These robots were used at the Fukushima nuclear power plant (Photo: Endeavor Robotics)

In 2017, Japanese researchers unveiled a prototype drone-robot combination for use in disaster relief work. It consisted of a vision-guided robot equipped with sensitive measuring systems, including force sensors, and a drone tethered to the robot. Four fish‑eye cameras mounted on the drone capture video of overhead views, allowing the robot’s operator to assess damage in the surrounding area.

Another Japanese rescue robot unveiled in 2017 is a multi-limbed robot that is 1.7 m tall, with four arms capable of independent operation and four caterpillar treads for mobility. Called the Octopus, it is capable of lifting 200 kg with each arm, crossing uneven terrain and lifting itself over obstacles with two arms while the other two clear debris.

The Octopus, developed by Japan’s Waseda University, is a robot designed to clear rubble in disaster areas, including nuclear facilities (Photo: Waseda University)

In the US, researchers are exploring ways in which a small and light collapsible, origami‑inspired robot, first produced by the National Aeronautics and Space Administration (NASA), could be adapted for use as a rescue robot. The device, known as PUFFER (Pop‑Up Flat Folding Explorer Robot) is designed to pack nearly flat for transport, and then re‑expand on site to explore tight nooks and crannies which are inaccessible to larger robots.

Over and underwater too

Given that 80% of the world’s population lives near water, maritime robotic vehicles can also play an important role in disaster relief by inspecting critical underwater infrastructure, mapping damage and identifying sources of pollution to harbours and fishing areas. Maritime robots helped to reopen ports and shipping channels in both Japan and Haiti after major earthquakes in 2011 and 2010 respectively.

In the Mediterranean, a battery‑powered robotic device first developed for use by lifeguards to rescue swimmers has been adapted to help rescue refugees crossing the Aegean Sea from Turkey. This maritime robot has a maximum cruising speed of 35 km/h and can function as a flotation device for 4 people.

Easy-to-find components and technology

Rescue robots use components and technology found in most other robots used for commercial purposes. Actuators and other electric motors, accelerometers, gyroscopes and dozens of sensors and cameras providing 360° views enable these robots to maintain balance while moving over uneven ground covered with rubble or debris, and to get a sense of the environment around them.
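
As one concrete example of how those sensors work together, a complementary filter is a standard way to fuse a gyroscope (fast to respond but prone to drift) with an accelerometer (noisy but drift-free) into a stable tilt estimate. This is a generic sketch, not any particular robot’s implementation; the blend factor is an assumption.

```python
import math

def complementary_filter(angle, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Blend the integrated gyro rate with the accelerometer's gravity angle."""
    gyro_angle = angle + gyro_rate * dt         # integrate angular velocity
    accel_angle = math.atan2(accel_x, accel_z)  # tilt implied by gravity
    return alpha * gyro_angle + (1 - alpha) * accel_angle
```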

A robot operating in a hazardous environment needs independent power and sensors for specific environments. It may be cut off from its human operator when communication signals are patchy. When remote operation guided by sensor data becomes impossible, a rescue robot needs the ability to make decisions on its own, using machine learning or other artificial intelligence (AI) algorithms.

Several IEC TCs and SCs cooperate on the development of International Standards for the broad range of electrotechnical systems, equipment and applications used in rescue robots. In addition to IEC TC 47: Semiconductor devices and IEC SC 47F: Microelectromechanical systems, mentioned above, other IEC TCs involved in standardization work for specific areas affecting rescue and disaster relief robots include IEC TC 44: Safety of machinery – Electrotechnical aspects; IEC TC 2: Rotating machinery; IEC TC 17: Switchgear and controlgear; and IEC TC 22: Power electronic systems and equipment.

Where humans fear to tread

The number of disasters recorded globally has doubled since the 1980s, with damage and losses estimated at an average of USD 100 billion a year since the start of the new millennium, according to the Overseas Development Institute (ODI), a UK think tank. This trend is likely to lead to increased demand for unmanned robotic devices that can assist disaster relief operations on land, in the air and at sea.

Robots of all kinds play a growing role in supporting SAR teams. Increasing autonomy will create more capable ground robots, while a combination of rapid technological advances and regulation should see the market for disaster relief drones soar over the next five years.

MarketsandMarkets, a research company, estimated in October 2016 that the total global market for drones, comprising commercial and military sales, would grow at a compound annual growth rate (CAGR) of nearly 20% between 2016 and 2022 to exceed USD 21 billion. Drones designed for humanitarian and disaster relief operations will account for 10% of the future drone market, according to the US‑based Association of Unmanned Vehicle Systems International (AUVSI).

FIRST global competition off to a rousing start with all teams getting visas

After much uproar, media attention, and political pressure, President Trump intervened so that all the teams headed to Washington, DC for the FIRST Global Robotics Championship whose visas had been held up or denied could receive them. Some visas were issued as late as two days before the event. Although the Afghan team got most of the press, the team from Gambia was also denied when it first applied.

The three-day event, which started Sunday evening in Washington, DC with opening ceremonies, has teams from 157 countries (including those from Afghanistan and Gambia), as well as some multinational teams representing continents. FIRST has organized competitions for many years, but this is the first year it is hosting an international one.

FIRST Global founder Dean Kamen, the inventor who created the Segway, said: “The competition’s objective is not just to teach children to build robots and explore careers in science, technology, engineering and math; it drives home the lesson of the importance of cooperation — across languages, cultures and borders. FIRST Global is getting them [teams from around the world] at a young age to learn how to communicate with each other, cooperate with each other and recognize that we’re all going to succeed together or we’re all going down together.” 

Ivanka Trump met with the Afghan girls’ team and also put in an appearance at the competition. She tweeted: “It is a game everyone can play and where everyone can turn pro!”

The Washington Post described another set of problems that got resolved in an unusual way regarding the team from Iran:

Because of sanctions, FIRST Global was unable to ship a robotics kit to Iran, where a group of teenagers awaited the parts to build a robot. That might have spelled the end of the team’s shot at going to the world championships. But the organization introduced the Iranian team to a group of teenage robotics enthusiasts at George C. Marshall High School in Falls Church, Va., calling themselves Team Gryphon. The team in Iran sketched out blueprints on the computer and sent the designs to their counterparts across the ocean and then corresponded over Skype.

Sunday, the team flew the Iranian flag at their station next to the flag of Team Gryphon — a black flag with a purple silhouette of the gryphon — as a sign of their unlikely partnership.

For Mohammadreza Karami, the team’s mentor, it was an inspiring example of cooperation. “It’s possible to solve all of the world’s problems if we put aside our politics and focus on peace,” Karami said.

Kirsten Springer, a 16-year-old rising junior at Marshall High, said she didn’t want the Iranian team to be locked out of the competition just because of the sanctions. “Everybody should be able to compete … and to learn and to use that experience for other aspects of their life,” she said.

Now that all the teams are present and the competition has begun, we wish them the best of luck, hope they have a lot of fun and befriend plenty of fellow robot enthusiasts, and thank them for participating, and their team mentors and supporters for helping make it all happen.

Bringing neural networks to cellphones

Image: Jose-Luis Olivares/MIT

In recent years, the best-performing artificial-intelligence systems — in areas such as autonomous driving, speech recognition, computer vision, and automatic translation — have come courtesy of software systems known as neural networks.

But neural networks take up a lot of memory and consume a lot of power, so they usually run on servers in the cloud, which receive data from desktop or mobile devices and then send back their analyses.

Last year, MIT associate professor of electrical engineering and computer science Vivienne Sze and colleagues unveiled a new, energy-efficient computer chip optimized for neural networks, which could enable powerful artificial-intelligence systems to run locally on mobile devices.

Now, Sze and her colleagues have approached the same problem from the opposite direction, with a battery of techniques for designing more energy-efficient neural networks. First, they developed an analytic method that can determine how much power a neural network will consume when run on a particular type of hardware. Then they used the method to evaluate new techniques for paring down neural networks so that they’ll run more efficiently on handheld devices.

The researchers describe the work in a paper they’re presenting next week at the Computer Vision and Pattern Recognition Conference. In the paper, they report that the methods offered as much as a 73 percent reduction in power consumption over the standard implementation of neural networks, and as much as a 43 percent reduction over the best previous method for paring the networks down.

Energy evaluator

Loosely based on the anatomy of the brain, neural networks consist of thousands or even millions of simple but densely interconnected information-processing nodes, usually organized into layers. Different types of networks vary according to their number of layers, the number of connections between the nodes, and the number of nodes in each layer.

The connections between nodes have “weights” associated with them, which determine how much a given node’s output will contribute to the next node’s computation. During training, in which the network is presented with examples of the computation it’s learning to perform, those weights are continually readjusted, until the output of the network’s last layer consistently corresponds with the result of the computation.
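
For readers who want that readjustment spelled out, here is a toy example of gradient steps pulling one layer’s weights toward a target output. It is purely illustrative and has nothing to do with the paper’s experiments.

```python
import numpy as np

W = np.zeros((4, 2))                     # connection weights, start at zero
x = np.array([0.5, -0.2, 0.1, 0.3])      # one training input
target = np.array([1.0, 0.0])            # desired output for that input

for _ in range(100):
    out = x @ W                          # forward pass through one layer
    W -= np.outer(x, out - target)       # nudge weights to reduce the error
print(np.round(out, 3))                  # converges to [1.0, 0.0]
```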

“The first thing we did was develop an energy-modeling tool that accounts for data movement, transactions, and data flow,” Sze says. “If you give it a network architecture and the value of its weights, it will tell you how much energy this neural network will take. One of the questions that people had is ‘Is it more energy efficient to have a shallow network and more weights or a deeper network with fewer weights?’ This tool gives us better intuition as to where the energy is going, so that an algorithm designer could have a better understanding and use this as feedback. The second thing we did is that, now that we know where the energy is actually going, we started to use this model to drive our design of energy-efficient neural networks.”
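
The quote hints at the shape such a tool might take: total energy as computation plus data movement, summed over layers. The toy stand-in below is not the team’s model; the per-operation costs and layer fields are invented for illustration (off-chip memory access is typically far costlier than arithmetic, which is the behavior mimicked here).

```python
COST_MAC = 1.0           # energy units per multiply-accumulate (assumed)
COST_DRAM = 200.0        # energy units per off-chip memory access (assumed)

def layer_energy(macs, weights_read, activations_moved):
    return macs * COST_MAC + (weights_read + activations_moved) * COST_DRAM

def network_energy(layers):
    """layers: list of dicts with 'macs', 'weights', 'activations' counts."""
    return sum(layer_energy(l["macs"], l["weights"], l["activations"])
               for l in layers)

# The "shallow with more weights vs. deep with fewer" question, in toy form:
shallow = [{"macs": 5e8, "weights": 2e7, "activations": 1e6}]
deep = [{"macs": 1e8, "weights": 2e6, "activations": 5e5}] * 4
print(network_energy(shallow), network_energy(deep))
```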

In the past, Sze explains, researchers attempting to reduce neural networks’ power consumption used a technique called “pruning.” Low-weight connections between nodes contribute very little to a neural network’s final output, so many of them can be safely eliminated, or pruned.
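
Magnitude pruning is simple to state in code. A minimal NumPy sketch, assuming a fixed fraction of the smallest-magnitude weights is removed:

```python
import numpy as np

def prune_by_magnitude(weights, fraction=0.5):
    """Zero out the `fraction` of entries with the smallest |weight|."""
    threshold = np.quantile(np.abs(weights), fraction)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
print(prune_by_magnitude(w))   # roughly half the entries are now zero
```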

Principled pruning

With the aid of their energy model, Sze and her colleagues — first author Tien-Ju Yang and Yu-Hsin Chen, both graduate students in electrical engineering and computer science — varied this approach. Although cutting even a large number of low-weight connections can have little effect on a neural net’s output, cutting all of them probably would, so pruning techniques must have some mechanism for deciding when to stop.

The MIT researchers thus begin pruning those layers of the network that consume the most energy. That way, the cuts translate to the greatest possible energy savings. They call this method “energy-aware pruning.”
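
The ordering idea can be sketched in a few lines. Here “pruning” is faked by scaling a layer’s energy down; a real implementation would remove weights and re-estimate energy, but the descending-energy loop is the point.

```python
def energy_aware_prune(layers, budget, prune_fraction=0.3):
    """Prune the most energy-hungry layers first until the budget is met."""
    for layer in sorted(layers, key=lambda l: l["energy"], reverse=True):
        if sum(l["energy"] for l in layers) <= budget:
            break
        layer["energy"] *= 1 - prune_fraction   # stand-in for real pruning

net = [{"name": "conv1", "energy": 50.0},
       {"name": "conv2", "energy": 120.0},
       {"name": "fc", "energy": 30.0}]
energy_aware_prune(net, budget=150.0)
print(net)   # conv2, the hungriest layer, is cut first
```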

Weights in a neural network can be either positive or negative, so the researchers’ method also looks for cases in which connections with weights of opposite sign tend to cancel each other out. The inputs to a given node are the outputs of nodes in the layer below, multiplied by the weights of their connections. So the researchers’ method looks not only at the weights but also at the way the associated nodes handle training data. Only if groups of connections with positive and negative weights consistently offset each other can they be safely cut. The result is more efficient networks, with fewer connections than earlier pruning methods achieved.
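
As a hypothetical illustration of that test, one could check whether a group’s summed contribution stays near zero across many training samples, rather than trusting the weights alone:

```python
import numpy as np

def group_cancels(inputs, weights, tol=1e-2):
    """inputs: (n_samples, n_connections); weights: (n_connections,)."""
    contributions = inputs @ weights      # summed contribution per sample
    return bool(np.all(np.abs(contributions) < tol))

x = np.random.rand(100, 1)
inputs = np.hstack([x, x])                # two connections, identical signals
w = np.array([0.5, -0.5])                 # opposite-sign weights
print(group_cancels(inputs, w))           # True: the pair offsets exactly
```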

“Recently, much activity in the deep-learning community has been directed toward development of efficient neural-network architectures for computationally constrained platforms,” says Hartwig Adam, the team lead for mobile vision at Google. “However, most of this research is focused on either reducing model size or computation, while for smartphones and many other devices energy consumption is of utmost importance because of battery usage and heat restrictions. This work is taking an innovative approach to CNN [convolutional neural net] architecture optimization that is directly guided by minimization of power consumption using a sophisticated new energy estimation tool, and it demonstrates large performance gains over computation-focused methods. I hope other researchers in the field will follow suit and adopt this general methodology to neural-network-model architecture design.”
