The Cocktail Bot 4.0 consists of five robots with one high-level goal: mix more than 20 possible drink combinations for you! But it isn’t as easy as it sounds. After the customer composes a drink in a web interface by combining liquor, soft drink and ice, the robots mix it on their own: five robot stations prepare the order and deliver it to the guest.
The first robot, a Universal Robots UR5, takes a glass out of an industrial dishwasher rack. The challenge here is that the glasses are placed upside down in the rack and have to be turned. Furthermore, there are two types of glasses – one for long drinks and one for shots like ‘whisky on the rocks’. The problem was solved mainly through the design of custom gripper fingers, which made it possible to grasp, turn and release the different types of glasses without an intermediate manipulation step. Rubber bands increased the friction and let the glass slide down smoothly onto the belt. After the glass was released, glass tracking started in order to determine its exact pose.
To determine the exact position of the glass on the conveyor belt, an image processing pipeline calculated its pose. The transparency of the glass made it difficult to detect reliably at every position; without an accurate pose estimate, the ice cubes or the liquor would have been poured off target rather than into the glass.
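For illustration, here is a minimal Python/OpenCV sketch of one way such a pose-estimation step could look, assuming a fixed overhead camera and a reference image of the empty belt; the FZI team’s actual pipeline is not public and certainly handles transparency more robustly:

```python
import cv2

def estimate_glass_position(frame, empty_belt):
    """Return the (x, y) image centroid of the glass, or None.

    Transparent glass barely changes pixel intensity, so a real system
    would rely on rim highlights, edge cues or depth data rather than
    plain background differencing.
    """
    diff = cv2.absdiff(frame, empty_belt)          # change vs. empty belt
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 15, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```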
While the first robot placed the glass at the center of the conveyor belt, the second robot, a Schunk LWA 4P, filled its shovel with ice cubes from an ice box. This is tricky because the ice cubes stick together over time and change shape as they melt. Again, a custom-designed gripper guaranteed the right amount of ice cubes in each glass.
After the ice was added, the next step was to prepare the liquor. In total there were four different kinds of shots – gin, whisky, rum and vodka. All of the liquors were in their original bottles, and the third robot, a KUKA KR10 fitted with a Robotiq Three-Finger Gripper, grasped them precisely. After the robot placed the bottle opening above the glass, a special liquid nozzle made sure that exactly 4 cl of liquor was poured into it. Pouring while following the movement of the glass made this process independent of the liquid level and bottle type.
At the end of the first conveyor belt the fourth robot, again a UR5, this time with a Schunk PG70 gripper, waited for the glass to arrive. If the guest had ordered just a shot, the glass was moved onto the second conveyor belt; otherwise one of the soft drinks was added. Apart from sparkling and tap water, the tap system provided coke, tonic water, bitter lemon and orange juice. Once the right amount of soft drink had been added, the long drink glass was likewise placed on the other belt.
Only one part was missing: the straw. While the fourth robot prepared the drink, the fifth and biggest robot, a Universal Robots UR10 with a Weiss WSG-25 gripper, picked a straw out of the straw dispenser standing next to it. It then moved to its waiting pose above the conveyor belt until the glass arrived. Again, custom-designed gripper fingers made it possible both to pick a straw out of the dispenser and to grasp a glass filled with liquid.
When the glass was within reach, the gripper released the straw into the glass, and the arm smoothly approached the glass to grasp it and place it on an interactive table, which displayed the placed orders as well as the progress of each drink.
All the robots had to work in synchronization, with almost no free space around them and in close proximity to the guests. The Robot Operating System (ROS) made it possible to control all the different kinds of robotic arms and grippers from one high-level controller. Each robot station was triggered separately, which increased robustness and makes it easier to extend the demonstrator for future parties.
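As a rough illustration of this architecture, here is a minimal ROS (Python) sketch of a high-level coordinator that triggers each station in turn; the station names and the use of the standard Trigger service are assumptions for illustration, not the FZI implementation:

```python
#!/usr/bin/env python
import rospy
from std_srvs.srv import Trigger

# Hypothetical station names; each station exposes a /<name>/run service.
STATIONS = ["glass_station", "ice_station", "liquor_station",
            "softdrink_station", "straw_station"]

def mix_drink():
    rospy.init_node("cocktail_coordinator")
    for station in STATIONS:
        srv_name = "/%s/run" % station
        rospy.wait_for_service(srv_name)
        run = rospy.ServiceProxy(srv_name, Trigger)
        result = run()  # each station is triggered separately
        if not result.success:
            rospy.logerr("Station %s failed: %s", station, result.message)
            return False
    return True

if __name__ == "__main__":
    mix_drink()
```

Triggering stations through independent services, as sketched here, also matches the article’s point about robustness: a failing station can be retried or replaced without restarting the whole pipeline.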
The Cocktail Bot 4.0 was created and programmed by a small team of researchers from the FZI Research Center for Information Technologies in Karlsruhe, Germany.
China has recently announced its long-term goal to become #1 in A.I. by 2030, planning to grow its A.I. industry to over $22 billion by 2020, $59 billion by 2025 and $150 billion by 2030. China did the same type of long-term strategic planning for robotics – to make it an in-country industry and to transform the country from a low-cost labor source into a high-tech manufacturing resource – and it’s working.
China's Artificial Intelligence Manifesto
With this major strategic long-term push into A.I., China is looking to rival U.S. market leaders such as Alphabet/Google, Apple, Amazon, IBM and Microsoft. China is keen not to be left behind in a technology that is increasingly pivotal — from online commerce to self-driving vehicles, energy, and consumer products. China aims to catch up by solving issues including a lack of high-end computer chips, software that writes software, and trained personnel. Beijing will play a big role in policy support and regulation as well as providing and funding research, incentives and tax credits.
“The local and central government are supporting this AI effort,” said Rui Yong, chief technology officer at PC maker Lenovo Group. “They see this trend coming and they want to invest more.”
Many cited the defeat of the world's top Go players from China and South Korea by AlphaGo, the game-playing software from Google-owned A.I. company DeepMind, as the event that prompted China's State Council to enact its A.I. plan, announced on July 20. The NY Times called it “a sort of Sputnik moment for China.”
Included in the announcement:
China will be investing heavily to ensure its companies, government and military leap to the front of the pack in a technology many think will one day form the basis of computing.
The plan covers almost every field: from using the technology for voice recognition to dispatching robots for deep-sea and Arctic exploration, as well as using AI in military security. The Council said the country must “firmly grasp this new stage of AI development.”
China said it plans to build “special-force” AI robots for ocean and Arctic exploration, use the technology for gathering evidence and reading court documents, and also use machines for “emotional interaction functions.”
In the final stage, by 2030, China will “become the world’s premier artificial intelligence innovation center,” which in turn will “foster a new national leadership and establish the key fundamentals for an economic great power.”
Chinese Investments in A.I.
The DoD regularly warns that Chinese money has been flowing into American A.I. companies — some of the same companies it says are likely to help the United States military develop future weapons systems. The NY Times cites the following example:
When the United States Air Force wanted help making military robots more perceptive, it turned to a Boston-based artificial intelligence start-up called Neurala. But when Neurala needed money, it got little response from the American military.
So Neurala turned to China, landing an undisclosed sum from an investment firm backed by a state-run Chinese company.
Chinese firms have become significant investors in American start-ups working on cutting-edge technologies with potential military applications. The start-ups include companies that make rocket engines for spacecraft, sensors for autonomous navy ships, and printers that make flexible screens that could be used in fighter-plane cockpits. Many of the Chinese firms are owned by state-owned companies or have connections to Chinese leaders.
Chinese venture firms have offices in Silicon Valley, Boston and other areas where A.I. startups are concentrated. Many Chinese companies — such as Baidu — have American-based research centers to take advantage of local talent.
The Committee on Foreign Investment in the United States (CFIUS), which reviews U.S. acquisitions by foreign entities for national security risks, appears to be blind to all of this.
China's Robot Manifesto Has Been Quite Successful
Chinese President Xi Jinping initiated “a robot revolution” and launched the “Made in China 2025” program. More than 1,000 firms have since entered (or begun transitioning into) robotics to take advantage of the program, and a new robotics association, CRIA (Chinese Robotics Industry Alliance), has been formed. By contrast, the sector was virtually non-existent a decade ago.
Under “Made in China 2025” and the five-year robot plan launched last April, Beijing is focusing on automating key sectors of the economy including car manufacturing, electronics, home appliances, logistics, and food production. At the same time, the government wants to increase the share of domestically produced robots to more than 50% by 2020, up from 31% last year, and to be able to make 150,000 industrial robots in 2020, 260,000 in 2025, and 400,000 by 2030. China's stated goal in both the five-year plan and the Made in China 2025 program is to overtake Germany, Japan, and the United States in manufacturing sophistication by 2049, the 100th anniversary of the founding of the People’s Republic of China. To make that happen, the government needs Chinese manufacturers to adopt robots by the millions. It also wants Chinese companies to start producing more of those robots and has encouraged strategic acquisitions.
Four of the top 15 acquisitions in 2016 were of robotics-related companies by Chinese acquirers:
Midea, a Chinese consumer products manufacturer, acquired KUKA, one of the Big 4 global robot manufacturers
The Kion Group, a predominantly Chinese-funded warehousing systems and equipment conglomerate, acquired Dematic, a large European AGV and material handling systems company
KraussMaffei, a big German industrial robots integrator, was acquired by ChemChina
Paslin, a US-based industrial robot integrator, was acquired by Zhejiang Wanfeng Technology, a Chinese industrial robot integrator
Singapore and MIT have been at the forefront of autonomous vehicle development. First, there were self-driving golf buggies. Then, an autonomous electric car. Now, leveraging similar technology, MIT and Singaporean researchers have developed and deployed a self-driving wheelchair at a hospital.
Spearheaded by Daniela Rus, the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science and director of MIT’s Computer Science and Artificial Intelligence Laboratory, this autonomous wheelchair is an extension of the self-driving scooter that launched at MIT last year — and it is a testament to the success of the Singapore-MIT Alliance for Research and Technology, or SMART, a collaboration between researchers at MIT and in Singapore.
Rus, who is also the principal investigator of the SMART Future Urban Mobility research group, says this newest innovation can help nurses focus more on patient care by relieving them of logistical work, such as searching for wheelchairs and wheeling patients through the complex hospital network.
“When we visited several retirement communities, we realized that the quality of life is dependent on mobility. We want to make it really easy for people to move around,” Rus says.
A magnetic folding robot arm can grasp and bend thanks to its pattern of origami-inspired folds and a wireless electromagnetic field. Credit: Wyss Institute at Harvard University
The traditional Japanese art of origami transforms a simple sheet of paper into complex, three-dimensional shapes through a very specific pattern of folds, creases, and crimps. Folding robots based on that principle have emerged as an exciting new frontier of robotic design, but generally require onboard batteries or a wired connection to a power source, making them bulkier and clunkier than their paper inspiration and limiting their functionality.
A team of researchers at the Wyss Institute for Biologically Inspired Engineering and the John A. Paulson School of Engineering and Applied Sciences (SEAS) at Harvard University has created battery-free folding robots that are capable of complex, repeatable movements powered and controlled through a wireless magnetic field.
“Like origami, one of the main points of our design is simplicity,” says co-author Je-sung Koh, Ph.D., who conducted the research as a Postdoctoral Fellow at the Wyss Institute and SEAS and is now an Assistant Professor at Ajou University in South Korea. “This system requires only basic, passive electronic components on the robot to deliver an electric current—the structure of the robot itself takes care of the rest.”
The research team’s robots are flat and thin (resembling the paper on which they’re based) plastic tetrahedrons, with the three outer triangles connected to the central triangle by hinges, and a small circuit on the central triangle. Attached to the hinges are coils made of a type of metal called shape-memory alloy (SMA) that can recover its original shape after deformation by being heated to a certain temperature. When the robot’s hinges lie flat, the SMA coils are stretched out in their “deformed” state; when an electric current is passed through the circuit and the coils heat up, they spring back to their original, relaxed state, contracting like tiny muscles and folding the robots’ outer triangles in toward the center. When the current stops, the SMA coils are stretched back out due to the stiffness of the flexure hinge, thus lowering the outer triangles back down.
The power that creates the electrical current needed for the robots’ movement is delivered wirelessly using electromagnetic power transmission, the same technology inside wireless charging pads that recharge the batteries in cell phones and other small electronics. An external coil with its own power source generates a magnetic field, which induces a current in the circuits in the robot, thus heating the SMA coils and inducing folding. In order to control which coils contract, the team built a resonator into each coil unit and tuned it to respond only to a very specific electromagnetic frequency. By changing the frequency of the external magnetic field, they were able to induce each SMA coil to contract independently from the others.
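The frequency selectivity follows from basic LC resonance, f = 1/(2π√(LC)). The short Python sketch below illustrates the idea with made-up component values (none are taken from the paper): giving each coil unit a different capacitance separates the units’ resonant frequencies, so each hinge can be addressed on its own.

```python
import math

def resonant_frequency(inductance_h, capacitance_f):
    """LC resonant frequency in Hz: f = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Illustrative values only: three coil units sharing one inductance but
# using different capacitors resonate at well-separated frequencies.
for name, L, C in [("hinge_1", 10e-6, 100e-12),
                   ("hinge_2", 10e-6, 47e-12),
                   ("hinge_3", 10e-6, 22e-12)]:
    print("%s resonates near %.1f MHz" % (name, resonant_frequency(L, C) / 1e6))
```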
“Not only are our robots’ folding motions repeatable, we can control when and where they happen, which enables more complex movements,” explains lead author Mustafa Boyvat, Ph.D., also a Postdoctoral Fellow at the Wyss Institute and SEAS.
Just like the muscles in the human body, the SMA coils can only contract and relax: it’s the structure of the body of the robot — the origami “joints” — that translates those contractions into specific movements. To demonstrate this capability, the team built a small robotic arm capable of bending to the left and right, as well as opening and closing a gripper around an object. The arm is constructed with a special origami-like pattern to permit it to bend when force is applied, and two SMA coils deliver that force when activated while a third coil pulls the gripper open. By changing the frequency of the magnetic field generated by the external coil, the team was able to control the robot’s bending and gripping motions independently.
There are many applications for this kind of minimalist robotic technology; for example, rather than having an uncomfortable endoscope put down their throat to assist a doctor with surgery, a patient could just swallow a micro-robot that could move around and perform simple tasks, like holding tissue or filming, powered by a coil outside their body. Using a much larger source coil — on the order of yards in diameter — could enable wireless, battery-free communication between multiple “smart” objects in an entire home. The team built a variety of robots — from a quarter-sized flat tetrahedral robot to a hand-sized ship robot made of folded paper — to show that their technology can accommodate a variety of circuit designs and successfully scale for devices large and small. “There is still room for miniaturization. We don’t think we went to the limit of how small these can be, and we’re excited to further develop our designs for biomedical applications,” Boyvat says.
“When people make micro-robots, the question is always asked, ‘How can you put a battery on a robot that small?’ This technology gives a great answer to that question by turning it on its head: you don’t need to put a battery on it, you can power it in a different way,” says corresponding author Rob Wood, Ph.D., a Core Faculty member at the Wyss Institute who co-leads its Bioinspired Robotics Platform and the Charles River Professor of Engineering and Applied Sciences at SEAS.
“Medical devices today are commonly limited by the size of the batteries that power them, whereas these remotely powered origami robots can break through that size barrier and potentially offer entirely new, minimally invasive approaches for medicine and surgery in the future,” says Wyss Founding Director Donald Ingber, who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children’s Hospital, as well as a Professor of Bioengineering at Harvard’s School of Engineering and Applied Sciences.
Imagine rescuers searching for people in the rubble of a collapsed building. Instead of digging through the debris by hand or having dogs sniff for signs of life, they bring out a small, air-tight cylinder. They place the device at the entrance of the debris and flip a switch. From one end of the cylinder, a tendril extends into the mass of stones and dirt, like a fast-climbing vine. A camera at the tip of the tendril gives rescuers a view of the otherwise unreachable places beneath the rubble.
This is just one possible application of a new type of robot created by mechanical engineers at Stanford University, detailed in a June 19 Science Robotics paper. Inspired by natural organisms that cover distance by growing — such as vines, fungi and nerve cells — the researchers have made a proof of concept of their soft, growing robot and have run it through some challenging tests.
“Essentially, we’re trying to understand the fundamentals of this new approach to getting mobility or movement out of a mechanism,” explained Allison Okamura, professor of mechanical engineering and senior author of the paper. “It’s very, very different from the way that animals or people get around the world.”
To investigate what their robot can do, the group created prototypes that move through various obstacles, travel toward a designated goal, and grow into a free-standing structure. This robot could serve a wide range of purposes, particularly in the realms of search and rescue and medical devices, the researchers said.
A growing robot
The basic idea behind this robot is straightforward. It’s a tube of soft material folded inside itself, like an inside-out sock, that grows in one direction when the material at the front of the tube everts, as the tube becomes right-side-out. In the prototypes, the material was a thin, cheap plastic and the robot body everted when the scientists pumped pressurized air into the stationary end. In other versions, fluid could replace the pressurized air.
What makes this design extremely useful is that it produces movement of the tip without movement of the body.
“The body lengthens as the material extends from the end but the rest of the body doesn’t move,” explained Elliot Hawkes, a visiting assistant professor from the University of California, Santa Barbara and lead author of the paper. “The body can be stuck to the environment or jammed between rocks, but that doesn’t stop the robot because the tip can continue to progress as new material is added to the end.”
Graduate students Joseph Greer, left, and Laura Blumenschein, right, work with Elliot Hawkes, a visiting assistant professor from the University of California, Santa Barbara, on a prototype of the vinebot. (Image credit: L.A. Cicero)
The group tested the benefits of this method for getting the robot from one place to another in several ways. It grew through an obstacle course, where it traveled over flypaper, sticky glue and nails and up an ice wall to deliver a sensor, which could potentially sense carbon dioxide produced by trapped survivors. It successfully completed this course even though it was punctured by the nails because the area that was punctured didn’t continue to move and, as a result, self-sealed by staying on top of the nail.
In other demonstrations, the robot lifted a 100-kilogram crate, grew under a door gap that was 10 percent of its diameter and spiraled on itself to form a free-standing structure that then sent out a radio signal. The robot also maneuvered through the space above a dropped ceiling, which showed how it was able to navigate unknown obstacles as a robot like this might have to do in walls, under roads or inside pipes. Further, it pulled a cable through its body while growing above the dropped ceiling, offering a new method for routing wires in tight spaces.
Difficult environments
“The applications we’re focusing on are those where the robot moves through a difficult environment, where the features are unpredictable and there are unknown spaces,” said Laura Blumenschein, a graduate student in the Okamura lab and co-author of the paper. “If you can put a robot in these environments and it’s unaffected by the obstacles while it’s moving, you don’t need to worry about it getting damaged or stuck as it explores.”
Some iterations of these robots included a control system that differentially inflated the body, which made the robot turn right or left. The researchers developed a software system that based direction decisions on images coming in from a camera at the tip of the robot.
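As a rough sketch of such a steering loop, the following Python snippet maps the goal’s horizontal offset in the tip camera image to a normalized differential-inflation command; the gain and the interface are illustrative assumptions, not the researchers’ controller:

```python
def steering_command(goal_px, image_width, gain=0.5):
    """Map the goal's horizontal offset in the tip camera image to a
    differential-inflation command in [-1, 1] (negative steers left)."""
    center = image_width / 2.0
    error = (goal_px - center) / center   # normalized horizontal offset
    return max(-1.0, min(1.0, gain * error))

# Example: the goal appears at pixel 500 in a 640-pixel-wide image,
# so the robot inflates one side to steer right.
print(steering_command(500, 640))  # ~0.28
```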
A primary advantage of soft robots is that they can be safer than hard, rigid robots not only because they are soft but also because they are often lightweight. This is especially useful in situations where a robot could be moving in close quarters with a person. Another benefit, in the case of this robot, is that it is flexible and can follow complicated paths. This, however, also poses some challenges.
Joey Greer, a graduate student in the Okamura lab and co-author of the paper, said that controlling a robot requires a precise model of its motion, which is difficult to establish for a soft robot. Rigid robots, by comparison, are much easier to model and control, but are unusable in many situations where flexibility or safety is necessary. “Also, using a camera to guide the robot to a target is a difficult problem because the camera imagery needs to be processed at the rate it is produced. A lot of work went into designing algorithms that both ran fast and produced results that were accurate enough for controlling the soft robot,” Greer said.
Going big—and small
The current prototype was built by hand and is powered by pneumatic air pressure. In the future, the researchers would like to create a version that could be manufactured automatically. Future versions may also grow using liquid, which could help deliver water to people trapped in tight spaces or put out fires in closed rooms. They are also exploring new, tougher materials, like rip-stop nylon and Kevlar.
The vinebot is a tube of soft material that grows in one direction. (Image credit: L.A. Cicero)
The researchers also hope to scale the robot much larger and much smaller to see how it performs. They’ve already created a 1.8 mm version and believe small growing robots could advance medical procedures. In place of a tube that is pushed through the body, this type of soft robot would grow without dragging along delicate structures.
Okamura is a member of Stanford Bio-X and the Stanford Neurosciences Institute.
This research was funded by the National Science Foundation.
Recent events demonstrate the growing presence of indoor mobile robots: (1) Savioke’s hotel butler robot won the 2017 IERA inventors award; (2) Knightscope’s security robot mistook a reflecting pond for a solid floor and dove in face-first to the delight of Twitterdom and the media; and (3) the sale of robotic hospital delivery provider Aethon to a Singaporean conglomerate.
Are we beginning to enter an era of multi-functional robots? Certainly that is the vision of each of the vendors listed below. They see their robots greet, assist and run errands during business hours and then, after hours, prowl and tally inventory and fixed assets, and all the while – 24/7 – check for anomalies and things that are suspicious. SuperRobot? Or one of the many new mobile service robots that offer each of these services as separate tasks? For example, Savioke, the hotel butler robot, is now using their Relay robots with FedEx in the warehousing and logistics sector.
The indoor robot marketplace
In an article in IEEE Spectrum, Travis Deyle, CEO of Silicon Valley startup Cobalt Robotics, which is developing indoor robots for security purposes, posited that commercial spaces are the next big marketplace for robotics and that there’s a massive, untapped market in each of the commercial spaces shown in his chart below:
“Commercial spaces could serve as a great stepping stone on the path toward general-purpose home robots by driving scale, volume, and capabilities. So… while billions are being spent on R&D for autonomous vehicles, indoor robots for commercial and public spaces reap the technology and cost benefits on sensors, computing, machine learning, and open-source software.”
Although the chart above focuses on the many applications within the commercial space, there is also much activity in various forms of indoor material handling using mobile robots in warehouses and distribution centers. The list of companies in that marketplace is quite large and will be detailed in a future article.
Hospital mobile robot firm sells to Singaporean conglomerate
ST Engineering acquired Pittsburgh, PA-based hospital robotics firm Aethon for $36 million. Aethon provides intralogistics in hospital environments by delivering goods and supplies using its TUG autonomous mobile robots. ST Engineering's strategic reasoning for the acquisition can be understood by this statement about the purchase:
“We evaluated the autonomous mobile robotics market thoroughly. Our evaluation led us to conclude that Aethon was the best company in this space having the right technology along with proven success in the commercialization and installation of autonomous mobile robots,” said Khee Loon Foo, General Manager, Kinetics Advanced Robotics of ST Kinetics.
Hotel robot wins 2017 IERA Inventors Award
The International Federation of Robotics (IFR) and the IEEE Robotics and Automation Society (IEEE/RAS) jointly sponsor an annual IERA (Innovation and Entrepreneurship in Robotics and Automation) Award which this year was presented to the Relay butler robot made by Savioke, a Silicon Valley startup.
Savioke's Relay robot makes deliveries all on its own in hotels, hospitals or logistics centers. Thanks to artificial intelligence and sensor technology, the robot can move safely through public spaces and navigate around people and obstacles dynamically.
The robots, which have already completed over 100,000 deliveries, can be seen in selected hotels in California and New York, as well as in Asia and the Middle East.
Indoor Robot Companies
Listed below are a few of the companies in the emerging mobile robot indoor commercial marketplaces described in Deyle's chart above. The list is not comprehensive but intended to give you an overview of who those new companies are, how far along they are, and how global they are.
Indoor Security Robots:
Recent research reports covering the security robots marketplace forecast that the market will reach $2.4 billion by 2022, growing at a CAGR of 9% between now and then. These forecasts include indoor and outdoor robots.
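For a sense of what those numbers imply (assuming 2017 as the starting year, which the reports do not state explicitly), a quick back-of-envelope check:

```python
# A market reaching $2.4B in 2022 while growing at a 9% CAGR implies a
# current (2017) size of roughly 2.4 / 1.09**5.
print(round(2.4 / 1.09 ** 5, 2))  # ~1.56, i.e. about $1.56B today
```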
Knightscope is a Silicon Valley security robot startup with robots in shopping malls, exhibition halls, parking lots and office complexes. It was Knightscope's robot that took the face dive in the Washington, DC pond. [Graphic of Knightscope robot from Twitter.]
Cobalt Robotics is also a Silicon Valley security robotics startup, though, as co-founder Travis Deyle describes it, “Security is just one entrée to the whole emerging world of indoor robotics.”
Gamma Two Robotics, a Colorado patrol robot maker, whose new Ramsee mobile robots have sensors for heat, toxic gas, motion detection and acoustic listening.
NxT Robotics is a San Diego mobile robot startup offering both an indoor (Iris) and outdoor (Scorpion) security patrol solution.
SMP Robotics is a San Francisco maker of mobile security robots for outdoor and indoor facilities.
Anbot (Hunan Wanwei Intelligent Robot Technology Co.) is a Chinese security robot that looks very similar to Knightscope's; it can be seen patrolling airport and museum public spaces in China.
Robot Security Systems is a Netherlands-based startup indoor mobile security robot provider.
Indoor Guides, Assistants, Greeters, Food Handlers and Gofor Robots:
This list could be much larger – particularly the gofor robots in the material handling field – but has been limited to those startups delivering product or with working prototypes focused on one or all of the commercial indoor market sectors.
MetraLabs is a German provider of fully autonomous mobile inventory, public space guide and retail robots for stores, malls and museums.
PAL Robotics is a Spanish maker of humanoid robots used as guides, entertainers, information providers and presenters – in multiple languages.
Pangolin Robot is a Chinese maker of restaurant server/waiter/busing robots and which also has a line of greeting and delivery robots.
Simbe Robotics is a San Francisco provider of a retail-space inventory robot that audits shelves for out-of-stock items, low-stock items, misplaced items, and pricing errors. Simbe's Tally robot can perform during normal store hours alongside shoppers and employees or autonomously after hours.
Bossa Nova Robotics is a Pittsburgh developer of a store robot that scans products on the shelves, makes store maps and helps employees keep track of where items are located.
BlueBotics is a Swiss provider of mobile robots, robotic platforms and products for mobile guides, marketing assistants and industrial cleaning.
Pepper, the mobile emotion-detecting robot jointly produced by Foxconn, Alibaba and SoftBank, is serving as the first point of contact in coffee stores, banks, corporate offices and other public spaces.
Yujin Robot is a Korean consumer products maker with a line of hotel and restaurant delivery robots.
Fellow Robots is a Silicon Valley developer of the NAVii robot which is used as a greeter but also maps and performs inventory scans.
Singapore Technologies Engineering Ltd (ST Engineering) has acquired Pittsburgh, PA-based robotics firm Aethon Inc through Vision Technologies Land Systems, Inc. (VTLS), and its wholly-owned subsidiary, VT Robotics, Inc, for $36 million.
The acquisition will be carried out by way of a merger with VT Robotics, a new entity incorporated specifically for the transaction. The merger will see Aethon as the surviving entity, which will operate as a subsidiary of VTLS and will be part of the ST Group’s Land Systems sector. Aethon’s leadership team and employees will remain in place and the company will continue to operate out of its Pittsburgh, PA location.
ST Engineering, listed as S63 on the Singapore Stock Exchange, is a Singapore-based integrated defense and engineering group focused on aerospace, electronics, and land, sea and air unmanned systems for the battlefield. It employs over 21,000 people and has annual revenues of around $5 billion.
“We evaluated the autonomous mobile robotics market thoroughly. Our evaluation led us to conclude that Aethon was the best company in this space having the right technology along with proven success in the commercialization and installation of autonomous mobile robots. We look forward to working with the Pittsburgh, PA team to grow the company,” says Khee Loon Foo, General Manager, Kinetics Advanced Robotics of ST Kinetics.
Aethon provides intralogistics in manufacturing and hospital environments by delivering goods and supplies using its TUG autonomous mobile robots. TUGs are self-driving autonomous robots capable of hauling or towing up to 1,400 lbs as they dynamically and safely navigate around people and through the corridors of client facilities.
“This acquisition is a terrific event for our company, employees and our customers since it provides Aethon with the resources and corporate backing to grow and develop new innovative robotic technology and more aggressively pursue new markets. We will now be able to expand our development capabilities to enhance our current technology and bring exciting logistics solutions to new vertical and global markets,” says Aldo Zini, CEO of Aethon.
It comes down to the question of what a robot really is. While science fiction has often portrayed robots as androids carrying out tasks in much the same way as humans, the reality is that robots take much more specialised forms. Traditional 20th-century robots were automated machines and robotic arms building cars in factories. Commercial 21st-century robots are supermarket self-checkouts, automated guided warehouse vehicles, and even burger-flipping machines in fast-food restaurants.
Ultimately, humans haven’t become completely redundant because these robots may be very efficient but they’re also kind of dumb. They do not think, they just act, in very accurate but very limited ways. Humans are still needed to work around robots, doing the jobs the machines can’t and fixing them when they get stuck. But this is all set to change thanks to a new wave of smarter, better value machines that can adapt to multiple tasks. This change will be so significant that it will create a new industrial revolution.
This era of “Industry 4.0” is being driven by the same technological advances that enable the capabilities of the smartphones in our pockets. It is a mix of low-cost and high-power computers, high-speed communication and artificial intelligence. This will produce smarter robots with better sensing and communication abilities that can adapt to different tasks, and even coordinate their work to meet demand without the input of humans.
In the manufacturing industry, where robots have arguably made the most headway of any sector, this will mean a dramatic shift from centralised to decentralised collaborative production. Traditional robots focused on single, fixed, high-speed operations and required a highly skilled human workforce to operate and maintain them. Industry 4.0 machines are flexible, collaborative and can operate more independently, which ultimately removes the need for a highly skilled workforce.
For large-scale manufacturers, Industry 4.0 means their robots will be able to sense their environment and communicate in an industrial network that can be run and monitored remotely. Each machine will produce large amounts of data that can be collectively studied using what is known as “big data” analysis. This will help identify ways to improve operating performance and production quality across the whole plant, for example by better predicting when maintenance is needed and automatically scheduling it.
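As a deliberately simple illustration of this kind of analysis, the Python sketch below flags a machine for maintenance when its recent sensor readings drift from their historical baseline; real Industry 4.0 systems use far richer models over fleet-wide data:

```python
import numpy as np

def maintenance_due(vibration_log, window=100, sigma=3.0):
    """Flag a machine for maintenance when the mean of its most recent
    readings drifts more than `sigma` standard deviations away from the
    historical baseline. Purely illustrative thresholds."""
    history = np.asarray(vibration_log[:-window])
    recent = np.asarray(vibration_log[-window:])
    baseline, spread = history.mean(), history.std()
    return abs(recent.mean() - baseline) > sigma * spread
```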
For small-to-medium manufacturing businesses, Industry 4.0 will make it cheaper and easier to use robots. It will create machines that can be reconfigured to perform multiple jobs and adjusted to work on a more diverse product range and different production volumes. This sector is already beginning to benefit from reconfigurable robots designed to collaborate with human workers and analyse their own work to look for improvements, such as BAXTER, SR-TEX and CareSelect.
While these machines are getting smarter, they are still not as smart as us. Today’s industrial artificial intelligence is narrow: machines can give the appearance of human intelligence, but only within the bounds designed by humans.
What’s coming next is known as “deep learning”. Similar to big data analysis, it involves processing large quantities of data in real time to make decisions about what is the best action to take. The difference is that the machine learns from the data so it can improve its decision making. A perfect example of deep learning was demonstrated by Google’s AlphaGo software, which taught itself to beat the world’s greatest Go players.
The turning point in applying artificial intelligence to manufacturing could come with the application of special microchips called graphical processing units (GPUs). These enable deep learning to be applied to extremely large data sets at extremely fast speeds. But there is still some way to go and big industrial companies are recruiting vast numbers of scientists to further develop the technology.
Impact on industry
As Industry 4.0 technology becomes smarter and more widely available, manufacturers of any size will be able to deploy cost-effective, multipurpose and collaborative machines as standard. This will lead to industrial growth and market competitiveness, with a greater understanding of production processes leading to new high-quality products and digital services.
Exactly what impact a smarter robotic workforce with the potential to operate on its own will have on the manufacturing industry is still widely disputed. Artificial intelligence as we know it from science fiction is still in its infancy. It could well be the 22nd century before robots really have the potential to make human labour obsolete by developing not just deep learning but true artificial understanding that mimics human thinking.
Ideally, Industry 4.0 will enable human workers to achieve more in their jobs by removing repetitive tasks and giving them better robotic tools. In theory, this would allow us humans to focus more on business development, creativity and science, which it would be much harder for any robot to do. Technology that has made humans redundant in the past has forced us to adapt, generally with more education.
But because Industry 4.0 robots will be able to operate largely on their own, we might see much greater human redundancy from manufacturing jobs without other sectors being able to create enough new work. Then we might see more political moves to protect human labour, such as taxing robots.
Again, in an ideal scenario, humans may be able to focus on doing the things that make us human, perhaps fuelled by a basic income generated from robotic work. Ultimately, it will be up to us to define whether the robotic workforce will work for us, with us, or against us.
Adriana Schulz, an MIT PhD student in the Computer Science and Artificial Intelligence Laboratory, demonstrates the InstantCAD computer-aided-design-optimizing interface. Photo: Rachel Gordon/MIT CSAIL
Almost every object we use is developed with computer-aided design (CAD). Ironically, while CAD programs are good for creating designs, using them is actually very difficult and time-consuming if you’re trying to improve an existing design to make the best possible product. Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Columbia University are trying to make the process faster and easier: in a new paper, they’ve developed InstantCAD, a tool that lets designers interactively edit, improve, and optimize CAD models using a more streamlined and intuitive workflow.
InstantCAD integrates seamlessly with existing CAD programs as a plug-in, meaning that designers don’t have to learn new tools to use it.
“From more ergonomic desks to higher-performance cars, this is really about creating better products in less time,” says Department of Electrical Engineering and Computer Science PhD student and lead author Adriana Schulz, who will be presenting the paper at this month’s SIGGRAPH computer-graphics conference in Los Angeles. “We think this could be a real game changer for automakers and other companies that want to be able to test and improve complex designs in a matter of seconds to minutes, instead of hours to days.”
The paper was co-written by Associate Professor Wojciech Matusik, PhD student Jie Xu, and postdoc Bo Zhu of CSAIL, as well as Associate Professor Eitan Grinspun and Assistant Professor Changxi Zheng of Columbia University.
Traditional CAD systems are “parametric,” which means that when engineers design models, they can change properties like shape and size (“parameters”) based on different priorities. For example, when designing a wind turbine you might have to make trade-offs between how much airflow you can get versus how much energy it will generate.
InstantCAD enables designers to interactively edit, improve, and optimize CAD models using a more streamlined and intuitive workflow. Photo: Rachel Gordon/MIT CSAIL
However, it can be difficult to determine the absolute best design for what you want your object to do, because there are many different options for modifying the design. On top of that, the process is time-consuming because changing a single property means having to wait to regenerate the new design, run a simulation, see the result, and then figure out what to do next.
With InstantCAD, the process of improving and optimizing the design can be done in real-time, saving engineers days or weeks. After an object is designed in a commercial CAD program, it is sent to a cloud platform where multiple geometric evaluations and simulations are run at the same time.
With this precomputed data, you can instantly improve and optimize the design in two ways. With “interactive exploration,” a user interface provides real-time feedback on how design changes will affect performance, like how the shape of a plane wing impacts air pressure distribution. With “automatic optimization,” you simply tell the system to give you a design with specific characteristics, like a drone that’s as lightweight as possible while still being able to carry the maximum amount of weight.
It’s hard to optimize an object’s design because of the massive size of the design space (the number of possible design options).
“It’s too data-intensive to compute every single point, so we have to come up with a way to predict any point in this space from just a small number of sampled data points,” says Schulz. “This is called ‘interpolation,’ and our key technical contribution is a new algorithm we developed to take these samples and estimate points in the space.”
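To make the idea concrete, here is a minimal Python sketch using standard radial-basis-function interpolation as a stand-in for the paper’s own (more sophisticated) scheme: performance is precomputed at sampled design points, and unseen designs are then evaluated instantly from the surrogate rather than by re-running a simulation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Precomputed samples: design parameters -> simulated performance.
rng = np.random.default_rng(0)
params = rng.uniform(0.0, 1.0, size=(50, 2))       # e.g. wing span, chord
performance = np.sin(params[:, 0]) + params[:, 1]  # stand-in "simulator"

# Fit a surrogate once from the sampled points.
surrogate = RBFInterpolator(params, performance)

# Instant feedback: estimate an unseen design without re-simulating.
print(surrogate(np.array([[0.3, 0.7]])))
```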
Matusik says InstantCAD could be particularly helpful for more intricate designs for objects like cars, planes, and robots, particularly for industries like car manufacturing that care a lot about squeezing every little bit of performance out of a product.
“Our system doesn’t just save you time for changing designs, but has the potential to dramatically improve the quality of the products themselves,” says Matusik. “The more complex your design gets, the more important this kind of a tool can be.”
Because of the system’s productivity boosts and CAD integration, Schulz is confident that it will have immediate applications for industry. Down the line, she hopes that InstantCAD can also help lower the barrier for entry for casual users.
“In a world where 3-D printing and industrial robotics are making manufacturing more accessible, we need systems that make the actual design process more accessible, too,” Schulz says. “With systems like this that make it easier to customize objects to meet your specific needs, we hope to be paving the way to a new age of personal manufacturing and DIY design.”
The project was supported by the National Science Foundation.
The K5 security robot fell into a fountain in Washington, D.C.
July 17, 2017 – July 23, 2017
News
A U.S. drone strike in Afghanistan is reported to have mistakenly killed 15 Afghan soldiers. In a statement, Afghanistan’s Ministry of Defense reported that the strike hit a security outpost in Helmand province. (Voice of America)
The U.K. Department of Transport is developing regulations that would implement a drone registration program, safety courses for drone owners, and more extensive geo-fencing to keep drones out of restricted areas. According to the BBC, it is not yet clear when the new rules will go into effect.
China announced plans to advance the development of artificial intelligence. The State Council of the People’s Republic of China released a plan to grow AI-related industries into a $59.07 billion sector by 2025. (Reuters)
Canada’s transportation safety agency issued an update to its drone regulations. The update relaxes key provisions for recreational and commercial drone users. (CTV News)
An Australian student has developed a drone that is capable of flying for longer and at much higher speeds than other consumer systems. (ABC News)
Airbus Defense and Space conducted a test flight of a subscale model of the Sagitta stealth drone that it is developing with a group of German research institutes. (Aviation Week)
YouTube channel Make it Extreme published a video showing how one can build a DIY counter-drone net gun. (Popular Mechanics)
Estonian Startup Marduk Technologies plans to begin testing its Shark counter-drone system with the Estonian military in August or September. (IHS Jane’s International Defence Review)
The U.S. Navy issued a draft Request for Proposals detailing some of the characteristics of its planned MQ-25A Stingray refueling drone. (USNI News)
Police departments in Dorset, Cornwall, and Devon in the U.K. have been using drones to track reckless motorcyclists. (The Drive)
The town of Deadwood in South Dakota has approved an ordinance restricting the use of drones in the city. (Black Hills Pioneer)
Investigators have concluded that a mid-air collision in Australia that was thought to have been caused by a drone was actually caused by a bat. (ABC News)
A drone flying near the scene of a car crash in Avonport, Canada delayed the departure of a helicopter that was airlifting a patient to hospital. (CBC)
Canada’s OEX Recovery group will use a Kraken unmanned undersea vehicle to search for the remains of several subscale prototype jets that crashed into Lake Ontario in the 1950s. (Unmanned Systems Technology)
Solent Local Enterprise Partnership awarded BAE Systems a $593,871 grant to design a testing site for autonomous systems in the U.K. (Inside Unmanned Systems)
Iran Aircraft Manufacturing Industries announced that it will begin marketing the Hamaseh surveillance and strike drone to international customers. (FlightGlobal)
In this episode, Audrow Nash interviews Peter Corke, Professor of Robotics at the Queensland University of Technology and Director of the Australian Centre for Robotic Vision, about Robot Academy. Robot Academy is an online platform that provides free-to-use undergraduate-level learning resources for robotics and robotic vision.
The content was developed for two 6-week Massive Open Online Courses (MOOCs) that Corke taught in 2015 and 2016. This content is now available as individual lessons (over 200 videos, each less than 10 minutes long) or as masterclasses (collections of videos, around 1 hour in duration, previously a MOOC lecture). Unlike a MOOC, all lessons are available all the time.
While the content is typically designed for undergraduate-level students, around 20% of the lessons require no more than general knowledge. Each lesson is rated in terms of difficulty (on a 5-point scale), and Robot Academy references videos on Khan Academy to help students get up to speed to follow more advanced lessons.
Peter Corke
Peter Corke is Professor of Robotics and Control at the Queensland University of Technology leading the ARC Centre of Excellence for Robotic Vision in Australia. Previously he was a Senior Principal Research Scientist at the CSIRO ICT Centre where he founded and led the Autonomous Systems laboratory, the Sensors and Sensor Networks research theme and the Sensors and Sensor Networks Transformational Capability Platform. He is a Fellow of the IEEE. He was the Editor-in-Chief of the IEEE Robotics and Automation magazine; founding editor of the Journal of Field Robotics; member of the editorial board of the International Journal of Robotics Research, and the Springer STAR series. He has over 300 publications in the field and has held visiting positions at the University of Pennsylvania, University of Illinois at Urbana-Champaign, Carnegie-Mellon University Robotics Institute, and Oxford University.
When training to regain movement after stroke or spinal cord injury (SCI), patients must once again learn how to keep their balance during walking movements. Current clinical methods support the weight of the patient during movement while setting the body off balance. This means that when patients are ready to walk without mechanical assistance, it can be hard to re-train the body to balance against gravity. This is the issue addressed in a recent paper published in Science Translational Medicine by a team led by Courtine-Lab, and featuring Ijspeert Lab, NCCR Robotics and EPFL.
During walking, a combination of forces move the human body forward. In fact, the interaction of feet with the ground creates the majority of forward propulsion, but with every step, multiple muscles in the body are engaged to maintain movement and prevent falls. In order to fully regain the ability to walk, patients must develop both the muscles and the neural pathways required in these movements.
During partial body weight-supported gait therapy (whereby a patient trains on a treadmill while a robotic support system prevents them from falling), a patient is merely lifted upwards, with no support for forward or sideways movements, massively altering how the person within the support system moves. In fact, those within the training system use shorter steps, slower movements and less body rotation than the same people tested walking unaided.
In an effort to reduce these limitations of current therapy methods, the team developed a multi-directional gravity assist mechanism, meaning that the system supports patients not only in remaining upright, but also in moving forwards. This individually tailored support allows patients to walk in a natural and comfortable way, training the body to counterbalance against gravity and repositioning the torso in a natural position for walking.
The team developed a system, RYSEN, which allows patients to operate within a wide area and across a range of activities, from standing and walking to following a slalom course or a horizontal ladder projected in light onto the floor. They developed an algorithm that measures how the patient is walking and updates the support given as training proceeds. The team found that the system had to be tailored to each patient before use, but that with the upward and forward forces properly configured, almost all subjects experienced significant improvements in movement, even with small upward and forward forces on the torso. Indeed, patients paralyzed after SCI or stroke found that, using the system, they were able to walk and thus begin to rebuild muscles and neurological pathways.
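As a purely illustrative sketch of such an adaptation loop (the RYSEN algorithm itself is described in the paper; the gait metrics, targets and rates here are invented), the controller might nudge the support forces toward the smallest assistance that still yields natural gait:

```python
def update_support(upward_n, forward_n, gait_symmetry, step_length_m,
                   target_symmetry=0.9, target_step_m=0.5, rate=0.05):
    """Adjust upward/forward support forces (in newtons) based on two
    hypothetical gait metrics. Illustrative only, not the RYSEN rule."""
    if gait_symmetry >= target_symmetry and step_length_m >= target_step_m:
        upward_n *= 1.0 - rate    # walking well: gradually reduce assistance
    else:
        upward_n *= 1.0 + rate    # struggling: add a little more support
        forward_n *= 1.0 + rate
    return upward_n, forward_n
```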
This work exists within a larger framework at NCCR Robotics, whereby researchers are using gravity-assisted technologies to play a key role in clinical trials on electrical spinal cord stimulation with the ultimate aim of creating technologies that will improve rehabilitation after spinal cord injury and stroke.
Today, all cars sold must comply with the Federal Motor Vehicle Safety Standards (FMVSS). This is a huge set of standards, full of provisions written with human-driven cars in mind; making a radically different vehicle, like the Zoox, the Waymo Firefly, or a delivery robot, is simply not going to happen under those standards. There is a provision under which NHTSA can offer exemptions, but it covers small volumes, mostly for prototype and testing vehicles. The new rules would allow a vendor to get an exemption to make 100,000 vehicles per year, which should be enough for the early years of robocar deployment.
Secondly, these and other new regulations would preempt state regulations. Most players (except some states) have pushed for this. Many states don’t want the burden of regulating robocar design, since they don’t have the resources to do so, and most vendors don’t want what they call a “patchwork” of 50 regulations in the USA. My take is different. I agree the cost of a patchwork is not to be ignored, but the benefits of having jurisdictional competition may be much greater. When California proposed to ban vehicles like the Google Firefly, Texas immediately said, “Come to Texas, we won’t get in your way.” That pushed California to rethink. Having one regulation is good — but it has to be the right regulation, and we’re much too early in the game to know what the right regulation is.
This is just a committee in the House, and there is lots more distance to go, including the Senate and all the other usual hurdles. Whatever people thought about how much regulation there should be, everybody has known that the FMVSS needs a difficult and complex revision to work in the world of robocars, and a temporary exemption can be a solution to that.
In 2016, the European Union co-funded 17 new robotics projects from the Horizon 2020 Framework Programme for research and innovation. 16 of these resulted from the robotics work programme, and 1 project resulted from the Societal Challenges part of Horizon 2020. The robotics work programme implements the robotics strategy developed by SPARC, the Public-Private Partnership for Robotics in Europe (see the Strategic Research Agenda).
Every week, euRobotics will publish a video interview with a project, so that you can find out more about their activities. This week features HEPHAESTUS: Highly automatEd PHysical Achievements and performancES using cable roboTs Unique Systems.
Objectives
The Hephaestus project addresses novel concepts for introducing robotics and autonomous systems into the construction sector, where such products are rare or almost non-existent. It focuses on one of the most important parts of the sector: facades, and the work that must be done when this part of a building is built or needs maintenance. It proposes a new, automated way to install these products, providing a solution that is highly industrialized not only in production but also in installation and maintenance.
Expected impact
Hephaestus aims to automate the on-site execution and installation process, strengthening the construction sector in Europe and positioning the European robotics industry as the leader and reference in this large and growing market. The Hephaestus solution is expected to reduce the number of work accidents during façade installation by up to 90%, installation costs by around 20%, and annual maintenance and cleaning costs by around 44%. Curtain wall construction currently accounts for an annual market of €30,000 million in Europe.
Pic2Recipe, an artificial intelligence system developed at MIT, can take a photo of an entree and suggest a similar recipe to it. Photo: Jason Dorfman/MIT CSAIL
There are few things social media users love more than flooding their feeds with photos of food. Yet we seldom use these images for much more than a quick scroll on our cellphones. Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) believe that analyzing photos like these could help us learn recipes and better understand people’s eating habits.
In a new paper with the Qatar Computing Research Institute (QCRI), the team trained an artificial intelligence system called Pic2Recipe to look at a photo of food and be able to predict the ingredients and suggest similar recipes.
“In computer vision, food is mostly neglected because we don’t have the large-scale datasets needed to make predictions,” says Yusuf Aytar, an MIT postdoc who co-wrote a paper about the system with MIT Professor Antonio Torralba. “But seemingly useless photos on social media can actually provide valuable insight into health habits and dietary preferences.”
The paper will be presented later this month at the Computer Vision and Pattern Recognition conference in Honolulu. CSAIL graduate student Nick Hynes was lead author alongside Amaia Salvador of the Polytechnic University of Catalonia in Spain. Co-authors include CSAIL postdoc Javier Marin, as well as scientist Ferda Ofli and research director Ingmar Weber of QCRI.
How it works
The web has spurred a huge growth of research in the area of classifying food data, but the majority of it has used much smaller datasets, which often leads to major gaps in labeling foods.
In 2014 Swiss researchers created the “Food-101” dataset and used it to develop an algorithm that could recognize images of food with 50 percent accuracy. Future iterations only improved accuracy to about 80 percent, suggesting that the size of the dataset may be a limiting factor.
Even the larger datasets have often been somewhat limited in how well they generalize across populations. A database from City University of Hong Kong has over 110,000 images and 65,000 recipes, each with ingredient lists and instructions, but only contains Chinese cuisine.
The CSAIL team’s project aims to build off of this work but dramatically expand in scope. Researchers combed websites like All Recipes and Food.com to develop “Recipe1M,” a database of over 1 million recipes that were annotated with information about the ingredients in a wide range of dishes. They then used that data to train a neural network to find patterns and make connections between the food images and the corresponding ingredients and recipes.
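Conceptually, this kind of training yields a joint embedding space in which a food photo and its recipe land close together, so recipe suggestion reduces to nearest-neighbor retrieval. Here is a minimal Python sketch of that retrieval step only; the embeddings themselves would come from the trained networks, which are not reproduced here:

```python
import numpy as np

def retrieve_recipes(image_vec, recipe_vecs, k=5):
    """Return indices of the k recipes whose embeddings are closest to
    the image embedding under cosine similarity."""
    img = image_vec / np.linalg.norm(image_vec)
    rec = recipe_vecs / np.linalg.norm(recipe_vecs, axis=1, keepdims=True)
    scores = rec @ img                     # cosine similarity per recipe
    return np.argsort(scores)[::-1][:k]    # best matches first
```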
Given a photo of a food item, Pic2Recipe could identify ingredients like flour, eggs, and butter, and then suggest several recipes that it determined to be similar to images from the database. (The team has an online demo where people can upload their own food photos to test it out.)
“You can imagine people using this to track their daily nutrition, or to photograph their meal at a restaurant and know what’s needed to cook it at home later,” says Christoph Trattner, an assistant professor at MODUL University Vienna in the New Media Technology Department who was not involved in the paper. “The team’s approach works at a similar level to human judgement, which is remarkable.”
The system did particularly well with desserts like cookies or muffins, since that was a main theme in the database. However, it had difficulty determining ingredients for more ambiguous foods, like sushi rolls and smoothies.
It was also often stumped when there were similar recipes for the same dishes. For example, there are dozens of ways to make lasagna, so the team needed to make sure the system wouldn’t “penalize” recipes that are similar when trying to separate those that are different. (One way to solve this was by checking whether the ingredients in each are generally similar before comparing the recipes themselves.)
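One simple way to implement such a check, shown purely for illustration (the paper’s actual similarity measure may differ), is the Jaccard overlap between ingredient sets:

```python
def ingredient_overlap(a, b):
    """Jaccard overlap between two ingredient sets: a rough signal that
    two recipes are variants of the same dish."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

print(ingredient_overlap(
    {"pasta", "tomato", "beef", "cheese"},
    {"pasta", "tomato", "cheese", "spinach"}))  # 0.6
```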
In the future, the team hopes to be able to improve the system so that it can understand food in even more detail. This could mean being able to infer how a food is prepared (e.g., stewed versus diced) or distinguish different variations of foods, like mushrooms or onions.
The researchers are also interested in potentially developing the system into a “dinner aide” that could figure out what to cook given a dietary preference and a list of items in the fridge.
“This could potentially help people figure out what’s in their food when they don’t have explicit nutritional information,” says Hynes. “For example, if you know what ingredients went into a dish but not the amount, you can take a photo, enter the ingredients, and run the model to find a similar recipe with known quantities, and then use that information to approximate your own meal.”
The project was funded, in part, by QCRI, as well as the European Regional Development Fund (ERDF) and the Spanish Ministry of Economy, Industry, and Competitiveness.