In episode five of season three we compare and contrast AI and data science, take a listener question about getting started in machine learning, and listen to an interview with Joaquin Quiñonero Candela.
Talking Machines is now working with Midroll to source and organize sponsors for our show. In order to find sponsors who are a good fit for us, and of value to you, we’re surveying our listeners.
If you’d like to help us get a better idea of who makes up the Talking Machines community take the survey at http://podsurvey.com/MACHINES.
Meanwhile, at the Los Angeles Times, Maya Lau writes that a civilian oversight board is pushing the Los Angeles Sheriff’s Department to stop flying its drones.
In response to the U.K. Department for Transport’s report on drone impacts, a coalition of drone manufacturers pressed the government to release the data underpinning its findings. (BBC)
At Shephard News, Richard Thomas looks at how the commercial drone market continues to consolidate.
At an event in Washington, Gen. David Goldfein said that the Air Force needs better artificial intelligence in order to improve intelligence collection. (DefenseTech)
At an Ars Live event, Lisa Ling discussed her role as a drone imagery analyst for the U.S. Air National Guard. (Ars Technica)
Amazon has been granted a patent for a system by which its proposed delivery drones scan a customer’s home upon delivering a product in order to develop product recommendations for future purchases. (CNET)
British firm FlyLogix broke a national record for the longest beyond-line-of-sight drone flight during an 80km operation to inspect structures in the Irish Sea. (The Telegraph)
Rohde & Schwarz, ESG, and Diehl unveiled the Guardion, a counter-drone system. (Jane’s)
Researchers at Moscow Technological Institute have developed a defibrillator drone with a range of up to 50km. (TechCrunch)
The U.S. Army Aviation and Missile Research, Development, and Engineering Center is developing a robotic refueling system for helicopters. (Shephard Media)
India’s Defence Research and Development Organisation has developed an unmanned tank for reconnaissance and mine detection. (Economic Times)
Using hundreds of plastic ducks, researchers at the University of Adelaide in Australia have demonstrated that drones are more effective for counting birds than traditional techniques. (New Scientist)
Drones at Work
A team from Queensland University of Technology in Australia is planning to use drones to count koalas as part of a conservation initiative. (Phys.org)
Matagorda County and Wharton County in Texas are acquiring three drones for a range of operations. (The Bay City Tribune)
The Fire Department and Police Department of Orange, Connecticut have acquired a drone for emergency operations. (Milford-Orange Bulletin)
A drone carrying cell phones and other contraband crashed into the yard at the Washington State Prison in Georgia. (Atlanta Journal-Constitution)
North Carolina has adopted a bill that expands drone rules to recreational model aircraft and prohibits drone use near prisons. (Triangle Business Journal)
The U.S. Air Force awarded the University of Arizona a $750,000 grant to build autonomous drones to patrol the U.S. border with Mexico. (Photonics)
The Dallas Safari Club Foundation awarded Delta Waterfowl, a duck hunting organization, a $10,000 grant to use drones to conduct a survey of duck nests. (Grand Forks Herald)
In a statement, Dassault CEO Éric Trappier said that the French-U.K. collaboration on a fighter drone will continue in spite of Brexit and a new Franco-German manned fighter project. (FlightGlobal)
A U.S. military study found that the cost of the Navy’s MQ-4C Triton program has risen by 17 percent. (IHS Jane’s Defense Weekly)
In recent years engineers have been developing new technologies to enable robots and humans to move faster and jump higher. Soft, elastic materials store energy in these devices and, when that energy is released carefully, enable elegant dynamic motions: robots leap over obstacles and prosthetics empower sprinting. A fundamental challenge remains in developing these technologies: scientists spend long hours building and testing prototypes that can reliably move in specific ways so that, for example, a robot lands right-side up after a jump.
A pair of new computational methods developed by a team of researchers from the Massachusetts Institute of Technology (MIT), the University of Toronto and Adobe Research takes first steps toward automating the design of the dynamic mechanisms behind these movements. Their methods generate simulations that match the real-world behaviors of flexible devices, run 70 times faster than previously possible, and provide critical improvements in the accuracy of simulated collisions and rebounds. The methods are thus fast and accurate enough to automate the design of dynamic mechanisms for controlled jumping.
The team will present their methods and results from their paper, “Dynamics-Aware Numerical Coarsening for Fabrication Design,” at the SIGGRAPH 2017 conference in Los Angeles, 30 July to 3 August. SIGGRAPH spotlights the most innovative results in computer graphics research and interactive techniques worldwide.
“This research is pioneering work in applying computer graphics techniques to real physical objects with dynamic behavior and contact,” says lead author Desai Chen, a PhD candidate at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). “The techniques we’ve developed open the door to automating the design of highly dynamic, fast-moving objects.”
Chen’s co-authors include David I.W. Levin, assistant professor at the University of Toronto; Wojciech Matusik, associate professor of electrical engineering and computer science at MIT; and Danny M. Kaufman, senior research scientist at Adobe Research.
Major advances in computational design, physical modeling and rapid manufacturing have enabled the fabrication of objects with customized physical properties–such as tailored sneakers, complex prosthetics, and soft robots–while computer graphics research has seen rapid improvements and efficiencies in creating compelling animations of physics for games, virtual reality and film. In this new work, the team aims to combine efficiency and accuracy to enable simulation for design fabrication, and to accurately simulate objects in motion.
“The goal is to bring the physical rules of virtual reality much closer to those of actual reality,” says Levin.
In the research, the team addresses the challenge of simulating elastic objects as they collide: making the simulations accurate enough to match reality and fast enough to automate the design process. Until now, creating such simulations in the presence of contact, impact or friction has remained time-consuming and inaccurate.
“It is very important to get this part right, and, until now, our existing computer codes tend to break down here,” says Kaufman. “We realize that if we are doing design for the real world, we have to have code that correctly models things such as high-speed bouncing, collision and friction.”
The researchers demonstrate their new methods, Dynamics-Aware Coarsening (DAC) and Boundary Balanced Impact (BBI), by designing and fabricating mechanisms that flip, throw and jump over obstacles. Their methods perform simulations much faster than existing, state-of-the-art approaches and with greater accuracy when compared to real-world motions.
DAC works by reducing degrees of freedom, the number of values that encode motion, to speed up simulations while still capturing the motions that matter in dynamic scenarios. It finds the coarsest meshes that can correctly represent the key shapes the dynamics will produce, and it matches the material properties of those meshes directly to recorded video of experiments. BBI is a method for modeling the impact behavior of elastic objects. It uses material properties to smoothly project velocities near impact sites, capturing many real-world impact situations, such as the impact and rebound between a soft printed material and a table.
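To make the coarsening-and-matching idea concrete, here is a minimal, hypothetical sketch in Python: a drastically reduced model (a single damped spring-mass system standing in for a coarse mesh) has its stiffness and damping fitted so that its simulated trajectory matches a recorded one. This illustrates the general parameter-matching idea only, not the authors’ actual algorithm.

```python
# Illustrative sketch, not the paper's method: fit a coarse model's
# material parameters (stiffness k, damping c) to a recorded trajectory.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import minimize

def simulate(params, t):
    """Integrate x'' = -k*x - c*v for a unit mass; return displacement."""
    k, c = params
    def rhs(state, _t):
        x, v = state
        return [v, -k * x - c * v]
    return odeint(rhs, [1.0, 0.0], t)[:, 0]

t = np.linspace(0.0, 2.0, 200)
recorded = simulate([40.0, 0.8], t)  # stand-in for motion captured on video

# Tune the coarse model until its trajectory matches the recording.
result = minimize(lambda p: np.sum((simulate(p, t) - recorded) ** 2),
                  x0=[10.0, 0.1], bounds=[(1e-3, None), (0.0, None)],
                  method="L-BFGS-B")
print("fitted stiffness and damping:", result.x)
```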
The team was inspired by the need for faster, more accurate design tools that can capture simulations of elastic objects undergoing deformation and collision, especially at high speeds. Down the road, these methods could be applied to robot design as robots increasingly take on human-like movements and characteristics.
“This project is really a first step for us in pushing methods for simulating reality,” says Kaufman. “We are focusing on pushing them for automatic design and exploring how to effectively use them in design. We can create beautiful images in computer graphics and in animation, let’s extend these methods to actual objects in the real world that are useful, beautiful and efficient.”
From the Australian government’s new “data-driven profiling” trial for drug testing welfare recipients, to US law enforcement’s use of facial recognition technology and the deployment of proprietary software in sentencing in many US courts almost by stealth and with remarkably little outcry, technology is transforming the way we are policed, categorized as citizens and, perhaps one day soon, governed.
We are only in the earliest stages of so-called algorithmic regulation – intelligent machines deploying big data, machine learning and artificial intelligence (AI) to regulate human behaviour and enforce laws – but it already has profound implications for the relationship between private citizens and the state.
Furthermore, the rise of such technologies is occurring at precisely the moment when faith in governments across much of the Western world has plummeted to an all-time low. Voters across much of the developed world increasingly perceive establishment politicians and those who surround them to be out-of-touch bubble-dwellers and are registering their discontent at the ballot box.
A technical solution
In this volatile political climate, there’s a growing feeling that technology can provide an alternative solution. Advocates of algorithmic regulation claim that many human-created laws and regulations can be better and more immediately applied in real-time by AI than by human agents, given the steadily improving capacity of machines to learn and their ability to sift and interpret an ever-growing flood of (often smartphone-generated) data.
AI advocates also suggest that, based on historical trends and human behaviour, algorithms may soon be able to shape every aspect of our daily lives, from how we conduct ourselves as drivers, to our responsibilities and entitlements as citizens, and the punishments we should receive for not obeying the law. In fact one does not have to look too far into the future to imagine a world in which AI could actually autonomously create legislation, anticipating and preventing societal problems before they arise.
Some may herald this as democracy rebooted. In my view it represents nothing less than a threat to democracy itself – and deep scepticism should prevail. There are five major problems with bringing algorithms into the policy arena:
1) Self-reinforcing bias
What machine learning and AI in general excel at (unlike human beings) is analysing millions of data points in real time to identify trends and, based on those trends, offering up “if this, then that” conclusions. The inherent problem is that this carries a self-reinforcing bias: it assumes that what happened in the past will be repeated.
Let’s take the example of crime data. Black and minority neighbourhoods with lower incomes are far more likely to be blighted by crime and anti-social behaviour than prosperous white ones. If you then use algorithms to shape laws, what will inevitably happen is that such neighbourhoods are singled out for intensive police patrols, thereby increasing the odds of stand-offs and arrests.
This, of course, turns perfectly valid concerns about the high crime rate in a particular area into a self-fulfilling prophecy. If you are a kid born in an area targeted in this way, then the chances of escaping your environment grow ever slimmer.
This is already happening, of course. Predictive policing, which has been in use across the US since the early 2010s, has persistently faced accusations of being flawed and prone to deep-rooted racial bias. Whether predictive policing can sustainably reduce crime remains to be proven.
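To see how quickly such a feedback loop takes hold, consider a toy simulation (all numbers hypothetical): two districts with identical true crime rates, where patrols are allocated in proportion to past recorded incidents and heavier patrolling means more incidents get recorded.

```python
# Toy model of self-reinforcing bias: identical true crime rates, but a
# small initial gap in the *recorded* data steers patrols, and patrols
# in turn drive what gets recorded.
import random

random.seed(0)
true_rate = 100                 # actual incidents per period, both districts
records = {"A": 55, "B": 45}    # a small initial discrepancy in the data

for period in range(20):
    total = sum(records.values())
    shares = {d: records[d] / total for d in records}  # data-driven patrols
    for d in records:
        detect = 0.2 + 0.6 * shares[d]  # more patrols -> more gets recorded
        records[d] += sum(random.random() < detect for _ in range(true_rate))

print(records)  # the recorded gap widens even though true rates are equal
```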
2) Vulnerability to attack
A second and no less important issue around AI-shaped law is security. Virtually all major corporations, government institutions and agencies – including the US Department of Justice – have likely been breached at some point, largely because such organizations tend to lag far behind the hackers when it comes to securing data. It is, to put it mildly, unlikely that governments will be able to protect algorithms from attackers, and as algorithms tend to be “black boxed”, it’s unclear whether we’ll be able to identify if and when an algorithm has even been tampered with.
The recent debate in the US about alleged Russian hacking of the Democratic National Committee, which reportedly aided Donald Trump’s bid to become president, is a case in point. Similarly, owing to the complexity of the code that would need to be written to transfer government and judicial powers to a machine, it is a near certainty, given everything we know about software, that it would be riddled with bugs.
3) Who’s calling the shots?
There is also an issue around conflict of interest. The software used in policing and regulation isn’t developed by governments, of course, but by private corporations, often tech multinationals, who already supply government software and tend to have extremely clear proprietary incentives as well as, frequently, opaque links to government.
Such partnerships also raise questions around the transparency of these algorithms, a major concern given their impact on people’s lives. We live in a world in which government data is increasingly available to the public. This is a public good and I’m a strong supporter of it.
Yet the companies that benefit most from this free data surge show double standards: they are fierce advocates of free and open data when governments are the source, but fight tooth and nail to ensure that their own programming and data remain proprietary.
4) Are governments up to it?
Then there’s the issue of governments’ competence on matters digital. The vast majority of politicians in my experience have close to zero understanding of the limits of technology, what it can and cannot do. Politicians’ failure to grasp the fundamentals, let alone the intricacies, of the space means that they cannot adequately regulate the software companies that would be building the software.
If they are incapable of appreciating why backdoors cannot go hand-in-hand with encryption, they will likely be unable to make the cognitive jump to what algorithmic regulation, which has many more layers of complexity, would require.
Equally, the regulations that the British and French governments are putting in place, which give the state ever-expanding access to citizen data, suggest they do not understand the scale of the risk they are creating by building such databases. It is certainly just a matter of time before the next scandal erupts, involving a massive overreach of government.
5) Algorithms don’t do nuance
Meanwhile, arguably reflecting the hubristic attitude in Silicon Valley that there are few if any meaningful problems tech cannot solve, the final issue with the AI approach to regulation is its implicit assumption that there is an optimal solution to every problem.
Yet fixing seemingly intractable societal issues requires patience, compromise and, above all, arbitration. Take California’s water shortage. It’s a tale of competing demands – the agricultural industry versus the general population; those who argue for consumption to be cut to combat climate change, versus others who say global warming is not an existential threat. Can an algorithm ever truly arbitrate between these parties? On a macro level, is it capable of deciding who should carry the greatest burden regarding climate change: developed countries, who caused the problem in the first place, or developing countries who say it’s their time to modernize now, which will require them to continue to be energy inefficient?
My point here is that algorithms, while comfortable with black and white, are not good at coping with shifting shades of gray, with nuance and trade-offs; at weighing philosophical values and extracting hard-won concessions. While we could potentially build algorithms that implement and manage a certain kind of society, we would surely first need to agree what sort of society we want.
And then what happens when that society undergoes periodic (rapid or gradual) fundamental change? Imagine, for instance, the algorithm that would have been built when slavery was rife, being gay was unacceptable and women didn’t have the right to vote. Which is why, of course, we elect governments to base decisions not on historical trends but on visions which the majority of voters buy into, often honed with compromise.
Much of what civil societies have to do is establish an ever-evolving consensus about how we want our lives to be. And that’s not something we can outsource completely to an intelligent machine.
Setting some ground rules
All the problems notwithstanding, there’s little doubt that AI-powered government of some kind will happen. So, how can we avoid it becoming the stuff of bad science fiction?
To begin with, we should leverage AI to explore positive alternatives instead of just applying it to support traditional solutions to society’s perceived problems. Rather than simply finding and sending criminals to jail faster in order to protect the public, how about using AI to figure out the effectiveness of other potential solutions? Offering young adult literacy, numeracy and other skills might well represent a far superior and more cost-effective solution to crime than more aggressive law enforcement.
Moreover, AI should always be used at a population level, rather than at the individual level, in order to avoid stigmatizing people on the basis of their history, their genes and where they live. The same goes for the more subtle, yet even more pervasive data-driven targeting by prospective employers, health insurers, credit card companies and mortgage providers. While the commercial imperative for AI-powered categorization is clear, when it targets individuals it amounts to profiling with the inevitable consequence that entire sections of society are locked out of opportunity.
To be sure, not all companies use data against their customers. When a 2015 Harvard Business School study, and a subsequent review by Airbnb, uncovered routine bias against black and ethnic minority renters using the home-sharing platform, Airbnb executives took steps to clamp down on the problem. But Airbnb could have avoided the need for the study and its review altogether: a really smart application of AI algorithms to the platform’s data could have picked up the discrimination much earlier and perhaps also suggested ways of preventing it. This approach would exploit technology to support better decision-making by humans, rather than displace humans as decision-makers.
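As a sketch of what such a routine check might look like, the snippet below (with entirely made-up counts) applies a standard chi-square test to booking-acceptance rates by guest group and flags a significant disparity for human review. Nothing here reflects Airbnb’s actual systems or data.

```python
# Hypothetical illustration: test whether acceptance rates differ by
# guest group more than chance would allow.
from scipy.stats import chi2_contingency

#               accepted  rejected
contingency = [[720,      280],     # group 1 (made-up counts)
               [610,      390]]     # group 2 (made-up counts)

chi2, p_value, dof, expected = chi2_contingency(contingency)
print(f"chi2 = {chi2:.1f}, p = {p_value:.2g}")
if p_value < 0.01:
    print("Acceptance rates differ significantly; flag for human review.")
```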
To realize the potential of this approach in the public sector, governments need to devise a methodology that starts with a debate about what the desired outcome would be from the deployment of algorithms, so that we can understand and agree exactly what we want to measure the performance of the algorithms against.
Secondly – and politicians would need to get up to speed here – there would need to be a real-time and constant flow of data on algorithm performance for each case in which they are used, so that algorithms can continually adapt to reflect changing circumstances and needs.
Thirdly, any proposed regulation or legislation that is informed by the application of AI should be rigorously tested against a traditional human approach before being passed into law.
Finally, any for-profit company that uses public sector data to strengthen or improve its own algorithm should either share future profits with the government or agree an arrangement whereby said algorithm will at first be leased and, eventually, owned by the government.
Make no mistake, algorithmic regulation is on its way. But AI’s wider introduction into government needs to be carefully managed to ensure that it’s harnessed for the right reasons – for society’s betterment – in the right way. The alternative risks a chaos of unintended consequences and, ultimately, perhaps democracy itself.
Child using the robot-driven TPAD training method to improve crouch gait, a symptom of cerebral palsy. —Photo courtesy of Sunil Agrawal/Columbia Engineering
In the U.S., 3.6 out of 1000 school-aged children are diagnosed with cerebral palsy (CP). Their symptoms include abnormal gait patterns, which result in joint degeneration over time. Slow walking speed, reduced range of motion of the joints, small step length, large body sway, and absence of a heel strike are other difficulties that children with CP experience. A subset of these children exhibit crouch gait, which is characterized by excessive flexion of the hips, knees, or ankles.
A team led by Sunil Agrawal, professor of mechanical engineering and of rehabilitation and regenerative medicine at Columbia Engineering, has published a pilot study in Science Robotics that demonstrates a robotic training method that improves posture and walking in children with crouch gait by enhancing their muscle strength and coordination.
Crouch gait is caused by a combination of weak extensor muscles that do not produce adequate muscle forces to keep posture upright, coupled with tight flexor muscles that limit the joint range of motion. Among the extensor muscles, the soleus, a muscle that runs from just below the knee to the heel, plays an important role in preventing knee collapse during the middle of the stance phase when the foot is on the ground. Critical to standing and walking, the soleus muscle keeps the shank upright during the mid-stance phase of the gait to facilitate extension of the knee. It also provides propulsive forces on the body during the late stance phase of the gait cycle.
“One of the major reasons for crouch gait is weakness in soleus muscles,” says Agrawal, who is also a member of the Data Science Institute. “We hypothesized that walking with a downward pelvic pull would strengthen extensor muscles, especially the soleus, against the applied downward pull and would improve muscle coordination during walking. We took an approach opposite to conventional therapy with these children: instead of partial body weight suspension during treadmill walking, we trained participants to walk with a force augmentation.”
The research group knew that the soleus, the major weight-bearing muscle during single stance support, is activated more strongly among the lower leg muscles when more weight is added to the human body during gait. They reasoned that strengthening the soleus might help children with crouch gait to stand and walk more easily.
To test their hypothesis, the team used a robotic system — Tethered Pelvic Assist Device (TPAD) — invented in Agrawal’s Robotics and Rehabilitation (ROAR) Laboratory. The TPAD is a wearable, lightweight cable-driven robot that can be programmed to provide forces on the pelvis in a desired direction as a subject walks on a treadmill. The researchers worked with six children diagnosed with CP and exhibiting crouch gait for fifteen 16-minute training sessions over a duration of six weeks. While the children walked on treadmills, they wore the TPAD as a lightweight pelvic belt to which several wires were attached. The tension in each TPAD wire was controlled in real time by a motor placed on a stationary frame around the treadmill, based on real-time motion capture data from cameras. The researchers programmed the TPAD to apply an additional downward force through the center of the pelvis to intensively retrain the activity of the soleus muscles. They used a downward force equivalent to 10 percent of body weight, based on the results of healthy children carrying backpacks. This was the minimum weight needed to show notable changes in posture or gait during walking.
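The core computation in a cable-driven device like this is choosing non-negative wire tensions whose resultant equals the desired force on the pelvis. Below is a minimal sketch of that allocation under an assumed, simplified four-wire geometry; the numbers and anchor layout are illustrative, not the TPAD’s actual configuration.

```python
# Illustrative cable-tension allocation: find non-negative wire tensions
# whose resultant is a downward force of 10% body weight on the pelvis.
import numpy as np
from scipy.optimize import nnls

body_weight_n = 30.0 * 9.81                        # hypothetical 30 kg child
desired = np.array([0.0, 0.0, -0.10 * body_weight_n])

# Unit directions from the pelvic belt toward four low anchor points.
directions = np.array([[ 0.5,  0.5, -0.7],
                       [-0.5,  0.5, -0.7],
                       [ 0.5, -0.5, -0.7],
                       [-0.5, -0.5, -0.7]], dtype=float).T
directions /= np.linalg.norm(directions, axis=0)

# Cables can only pull, so solve directions @ t = desired with t >= 0.
tensions, residual = nnls(directions, desired)
print("wire tensions (N):", np.round(tensions, 1), " residual:", residual)
```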
“TPAD is a unique device because it applies external forces on the human body during walking,” says Jiyeon Kang, PhD candidate and lead author of the paper. “The training with this device is distinctive because it does not add mass/inertia to the human body during walking.”
The team examined the children’s muscle strength and coordination using electromyography data from the first and last sessions of training and also monitored kinematics and ground reaction forces continuously throughout the training. They found that their training was effective; it both enhanced the children’s upright posture and improved their muscle coordination. In addition, their walking features, including step length, range of motion of the lower limb angles, toe clearance, and heel-to-toe pattern, improved.
“Currently, there is no well-established physical therapy or strengthening exercise for the treatment of crouch gait,” Agrawal notes.
Heakyung Kim, A. David Gurewitsch Professor of Rehabilitation and Regenerative Medicine and Professor of Pediatrics at the Columbia University Medical Center, who treats these patients, added: “Feedback from the parents and children involved in this study was consistent. They reported improved posture, stronger legs, and faster walking speed, and our measurements bear that out. We think that our robotic TPAD training with downward pelvic pull could be a very promising intervention for these children.”
The researchers are planning more clinical trials to test a larger group and to vary more parameters. They are also considering studying children with hemiplegic or quadriplegic CP.
The Cocktail Bot 4.0 consists of five robots with one high-level goal: mix one of more than 20 possible drink combinations for you! But it isn’t as easy as it sounds. After the customer composes a drink in a web interface by combining liquor, soft drink and ice, the robots start to mix it on their own. Five robot stations prepare the order and deliver it to the guests.
The first robot, a Universal Robots UR5, takes a glass out of an industrial dishwasher rack. The challenge here is that the glasses are placed upside down in the rack and have to be turned. Furthermore, there are two types of glasses: one for long drinks and one for shots like ‘whisky on the rocks’. The problem was solved mainly through the design of custom gripper fingers, which made it possible to grasp, turn and release the different types of glasses without an intermediate manipulation step. Rubber bands also increased the friction and let the glass slide down smoothly onto the belt. After the glass was released, glass tracking started in order to determine its exact pose.
To determine the exact position of the glass on the conveyor belt, an image-processing pipeline calculated its pose. The transparency of the glass made it especially difficult to detect reliably at every position; without an accurate pose estimate, the ice cubes or the liquor would have been poured off target rather than into the glass.
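One common workaround for the transparency problem, sketched below under assumed file names and thresholds, is to compare each camera frame against a reference image of the empty belt and take the largest difference blob as the glass; the team’s actual pipeline is not described at this level of detail.

```python
# Hedged sketch: background subtraction against the empty belt to locate
# a (mostly transparent) glass, which still distorts the belt texture.
import cv2

background = cv2.imread("empty_belt.png", cv2.IMREAD_GRAYSCALE)  # assumed files
frame = cv2.imread("belt_with_glass.png", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(frame, background)
_, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove speckle noise

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] > 0:
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        print(f"glass centre in image coordinates: ({cx:.0f}, {cy:.0f})")
```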
While the first robot placed the glass at the center of the conveyor belt, the second robot, a Schunk LWA 4P, started to fill its shovel with ice cubes from an ice box. This is tricky because the ice cubes stick together over time and change shape as they melt. Again, a custom-designed gripper guaranteed the right amount of ice cubes in each glass.
After the ice was added, the next step was to prepare the liquor. In total, there were four different kinds of shots: gin, whisky, rum and vodka. All of the liquors were in their original bottles, and the third robot, a KUKA KR10 in combination with a Robotiq Three-Finger-Gripper, grasped them precisely. A special liquid nozzle made sure that only 4 cl of liquor was poured into each glass after the robot placed the bottle opening above it. Pouring while following the movement of the glass made this process independent of liquid level or bottle type.
At the end of the first conveyor belt, the fourth robot, again a UR5, this time with a Schunk PG70 gripper, waited for the glass to arrive. If the guest had ordered just a shot, the glass was moved onto the second conveyor belt. Otherwise one of the soft drinks was added. Apart from sparkling and tap water, the tap system provided coke, tonic water, bitter lemon and orange juice. When the right amount of soft drink had been added, the long-drink glass was also placed on the other belt.
Only one part was missing: the straw. While the fourth robot prepared the drink, the fifth and biggest robot, a Universal Robots UR10 with a Weiss WSG-25 gripper, started to get a straw out of the straw dispenser standing next to it. After picking one, the arm moved to its waiting pose above the conveyor belt until the glass arrived. Again, custom-designed gripper fingers made it possible both to pick a straw out of the dispenser and to grasp the glass filled with liquid.
When the glass was within reach, the gripper released the straw into the glass, and the arm then moved in to grasp the glass and place it on an interactive table, which displayed the placed orders as well as the progress of each drink.
All the robots had to work in sync, with almost no free space around them and in close proximity to the guests. The Robot Operating System (ROS) made it possible to control all the different kinds of robotic arms and grippers from one high-level controller. Each robot station was triggered separately, which increased robustness and makes it easier to extend the demonstrator for future parties.
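As an illustration of that pattern, here is a minimal ROS node, with hypothetical station and service names, in which each station exposes a standard Trigger service and a single orchestrator sequences them for one order.

```python
#!/usr/bin/env python
# Minimal sketch of a high-level ROS controller sequencing robot stations.
# Station names and service paths are hypothetical.
import rospy
from std_srvs.srv import Trigger

STATIONS = ["glass_handler", "ice_scooper", "liquor_pourer",
            "softdrink_tapper", "straw_placer"]

def run_order():
    rospy.init_node("cocktail_orchestrator")
    for name in STATIONS:
        service = "/%s/start" % name
        rospy.wait_for_service(service)
        start = rospy.ServiceProxy(service, Trigger)
        response = start()
        if not response.success:          # abort the order on any failure
            rospy.logerr("%s failed: %s", name, response.message)
            return
        rospy.loginfo("%s done", name)

if __name__ == "__main__":
    run_order()
```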
The Cocktail Bot 4.0 was created and programmed by a small team of researchers from the FZI Research Center for Information Technologies in Karlsruhe, Germany.
China recently announced its long-term goal of becoming #1 in A.I. by 2030. It plans to grow its A.I. industry to over $22 billion by 2020, $59 billion by 2025 and $150 billion by 2030. China did the same type of long-term strategic planning for robotics, aiming to build an in-country industry and to transform the country from a low-cost labor source into a high-tech manufacturing resource, and it’s working.
China's Artificial Intelligence Manifesto
With this major strategic long-term push into A.I., China is looking to rival U.S. market leaders such as Alphabet/Google, Apple, Amazon, IBM and Microsoft. China is keen not to be left behind in a technology that is increasingly pivotal — from online commerce to self-driving vehicles, energy, and consumer products. China aims to catch up by solving issues including a lack of high-end computer chips, software that writes software, and trained personnel. Beijing will play a big role in policy support and regulation as well as providing and funding research, incentives and tax credits.
“The local and central government are supporting this AI effort,” said Rui Yong, chief technology officer at PC maker Lenovo Group. “They see this trend coming and they want to invest more.”
Many cited the defeat of the world's top Go players from China and South Korea by AlphaGo, the game-playing software of Google-owned A.I. company DeepMind, as the event that caused China's State Council to enact and launch its A.I. plan, which it announced on July 20th. The NY Times called it “a sort of Sputnik moment for China.”
Included in the announcement:
China will be investing heavily to ensure its companies, government and military leap to the front of the pack in a technology many think will one day form the basis of computing.
The plan covers almost every field: from using the technology for voice recognition to dispatching robots for deep-sea and Arctic exploration, as well as using AI in military security. The Council said the country must “firmly grasp this new stage of AI development.”
China said it plans to build “special-force” AI robots for ocean and Arctic exploration, use the technology for gathering evidence and reading court documents, and also use machines for “emotional interaction functions.”
In the final stage, by 2030, China will “become the world’s premier artificial intelligence innovation center,” which in turn will “foster a new national leadership and establish the key fundamentals for an economic great power.”
Chinese Investments in A.I.
The DoD regularly warns that Chinese money has been flowing into American A.I. companies — some of the same companies it says are likely to help the United States military develop future weapons systems. The NY Times cites the following example:
When the United States Air Force wanted help making military robots more perceptive, it turned to a Boston-based artificial intelligence start-up called Neurala. But when Neurala needed money, it got little response from the American military.
So Neurala turned to China, landing an undisclosed sum from an investment firm backed by a state-run Chinese company.
Chinese firms have become significant investors in American start-ups working on cutting-edge technologies with potential military applications. The start-ups include companies that make rocket engines for spacecraft, sensors for autonomous navy ships, and printers that make flexible screens that could be used in fighter-plane cockpits. Many of the Chinese firms are owned by state-owned companies or have connections to Chinese leaders.
Chinese venture firms have offices in Silicon Valley, Boston and other areas with heavy A.I. startup activity. Many Chinese companies — such as Baidu — have American-based research centers to take advantage of local talent.
The Committee on Foreign Investment in the United States (CFIUS), which reviews U.S. acquisitions by foreign entities for national security risks, appears to be blind to all of this.
China's Robot Manifesto Has Been Quite Successful
Chinese President Xi Jinping initiated “a robot revolution” and launched the “Made in China 2025” program. More than 1,000 firms have emerged (or begun to transition) into robotics to take advantage of the program, and a new robotics association, CRIA (Chinese Robotics Industry Alliance), has been formed. By contrast, the sector was virtually non-existent a decade ago.
Under “Made in China 2025” and the five-year robot plan launched last April, Beijing is focusing on automating key sectors of the economy, including car manufacturing, electronics, home appliances, logistics and food production. At the same time, the government wants to increase the share of in-country-produced robots to more than 50% by 2020, up from 31% last year, and to be able to make 150,000 industrial robots in 2020, 260,000 in 2025 and 400,000 by 2030. China’s stated goal in both its five-year plan and the Made in China 2025 program is to overtake Germany, Japan and the United States in manufacturing sophistication by 2049, the 100th anniversary of the founding of the People’s Republic of China. To make that happen, the government needs Chinese manufacturers to adopt robots by the millions. It also wants Chinese companies to start producing more of those robots and has encouraged strategic acquisitions.
Four of the top 15 acquisitions in 2016 were of robotics-related companies by Chinese acquirers:
Midea, a Chinese consumer products manufacturer, acquired KUKA, one of the Big 4 global robot manufacturers
The Kion Group, a predominantly Chinese-funded warehousing systems and equipment conglomerate, acquired Dematic, a large European AGV and material handling systems company
KraussMaffei, a big German industrial robots integrator, was acquired by ChemChina
Paslin, a US-based industrial robot integrator, was acquired by Zhejiang Wanfeng Technology, a Chinese industrial robot integrator
Singapore and MIT have been at the forefront of autonomous vehicle development. First, there were self-driving golf buggies. Then, an autonomous electric car. Now, leveraging similar technology, MIT and Singaporean researchers have developed and deployed a self-driving wheelchair at a hospital.
Spearheaded by Daniela Rus, the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science and director of MIT’s Computer Science and Artificial Intelligence Laboratory, this autonomous wheelchair is an extension of the self-driving scooter that launched at MIT last year — and it is a testament to the success of the Singapore-MIT Alliance for Research and Technology, or SMART, a collaboration between researchers at MIT and in Singapore.
Rus, who is also the principal investigator of the SMART Future Urban Mobility research group, says this newest innovation can help nurses focus more on patient care by relieving them of logistical work such as searching for wheelchairs and wheeling patients through the complex hospital network.
“When we visited several retirement communities, we realized that the quality of life is dependent on mobility. We want to make it really easy for people to move around,” Rus says.
A magnetic folding robot arm can grasp and bend thanks to its pattern of origami-inspired folds and a wireless electromagnetic field. Credit: Wyss Institute at Harvard University
The traditional Japanese art of origami transforms a simple sheet of paper into complex, three-dimensional shapes through a very specific pattern of folds, creases, and crimps. Folding robots based on that principle have emerged as an exciting new frontier of robotic design, but generally require onboard batteries or a wired connection to a power source, making them bulkier and clunkier than their paper inspiration and limiting their functionality.
A team of researchers at the Wyss Institute for Biologically Inspired Engineering and the John A. Paulson School of Engineering and Applied Sciences (SEAS) at Harvard University has created battery-free folding robots that are capable of complex, repeatable movements powered and controlled through a wireless magnetic field.
“Like origami, one of the main points of our design is simplicity,” says co-author Je-sung Koh, Ph.D., who conducted the research as a Postdoctoral Fellow at the Wyss Institute and SEAS and is now an Assistant Professor at Ajou University in South Korea. “This system requires only basic, passive electronic components on the robot to deliver an electric current—the structure of the robot itself takes care of the rest.”
The research team’s robots are flat and thin (resembling the paper on which they’re based) plastic tetrahedrons, with the three outer triangles connected to the central triangle by hinges, and a small circuit on the central triangle. Attached to the hinges are coils made of a type of metal called shape-memory alloy (SMA) that can recover its original shape after deformation by being heated to a certain temperature. When the robot’s hinges lie flat, the SMA coils are stretched out in their “deformed” state; when an electric current is passed through the circuit and the coils heat up, they spring back to their original, relaxed state, contracting like tiny muscles and folding the robots’ outer triangles in toward the center. When the current stops, the SMA coils are stretched back out due to the stiffness of the flexure hinge, thus lowering the outer triangles back down.
The power that creates the electrical current needed for the robots’ movement is delivered wirelessly using electromagnetic power transmission, the same technology inside wireless charging pads that recharge the batteries in cell phones and other small electronics. An external coil with its own power source generates a magnetic field, which induces a current in the circuits in the robot, thus heating the SMA coils and inducing folding. In order to control which coils contract, the team built a resonator into each coil unit and tuned it to respond only to a very specific electromagnetic frequency. By changing the frequency of the external magnetic field, they were able to induce each SMA coil to contract independently from the others.
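That frequency-selective addressing rests on the standard LC resonance relation f = 1/(2π√(LC)). The short sketch below, with hypothetical component values, shows how a different capacitor per coil unit gives each its own activation frequency.

```python
# Resonant frequencies for three hypothetical coil units sharing one
# inductance but carrying different capacitors: f = 1 / (2*pi*sqrt(L*C)).
import math

inductance_h = 2e-6                          # 2 uH coil (assumed)
capacitors_f = [100e-12, 220e-12, 470e-12]   # one capacitor per coil unit

for i, c in enumerate(capacitors_f, start=1):
    f = 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * c))
    print(f"coil {i}: resonant at {f / 1e6:.1f} MHz")
```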
“Not only are our robots’ folding motions repeatable, we can control when and where they happen, which enables more complex movements,” explains lead author Mustafa Boyvat, Ph.D., also a Postdoctoral Fellow at the Wyss Institute and SEAS.
Just like the muscles in the human body, the SMA coils can only contract and relax: it’s the structure of the body of the robot — the origami “joints” — that translates those contractions into specific movements. To demonstrate this capability, the team built a small robotic arm capable of bending to the left and right, as well as opening and closing a gripper around an object. The arm is constructed with a special origami-like pattern to permit it to bend when force is applied, and two SMA coils deliver that force when activated while a third coil pulls the gripper open. By changing the frequency of the magnetic field generated by the external coil, the team was able to control the robot’s bending and gripping motions independently.
There are many applications for this kind of minimalist robotic technology; for example, rather than having an uncomfortable endoscope put down their throat to assist a doctor with surgery, a patient could just swallow a micro-robot that could move around and perform simple tasks, like holding tissue or filming, powered by a coil outside their body. Using a much larger source coil — on the order of yards in diameter — could enable wireless, battery-free communication between multiple “smart” objects in an entire home. The team built a variety of robots — from a quarter-sized flat tetrahedral robot to a hand-sized ship robot made of folded paper — to show that their technology can accommodate a variety of circuit designs and successfully scale for devices large and small. “There is still room for miniaturization. We don’t think we went to the limit of how small these can be, and we’re excited to further develop our designs for biomedical applications,” Boyvat says.
“When people make micro-robots, the question is always asked, ‘How can you put a battery on a robot that small?’ This technology gives a great answer to that question by turning it on its head: you don’t need to put a battery on it, you can power it in a different way,” says corresponding author Rob Wood, Ph.D., a Core Faculty member at the Wyss Institute who co-leads its Bioinspired Robotics Platform and the Charles River Professor of Engineering and Applied Sciences at SEAS.
“Medical devices today are commonly limited by the size of the batteries that power them, whereas these remotely powered origami robots can break through that size barrier and potentially offer entirely new, minimally invasive approaches for medicine and surgery in the future,” says Wyss Founding Director Donald Ingber, who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children’s Hospital, as well as a Professor of Bioengineering at Harvard’s School of Engineering and Applied Sciences.
Imagine rescuers searching for people in the rubble of a collapsed building. Instead of digging through the debris by hand or having dogs sniff for signs of life, they bring out a small, air-tight cylinder. They place the device at the entrance of the debris and flip a switch. From one end of the cylinder, a tendril extends into the mass of stones and dirt, like a fast-climbing vine. A camera at the tip of the tendril gives rescuers a view of the otherwise unreachable places beneath the rubble.
This is just one possible application of a new type of robot created by mechanical engineers at Stanford University, detailed in a June 19 Science Robotics paper. Inspired by natural organisms that cover distance by growing — such as vines, fungi and nerve cells — the researchers have made a proof of concept of their soft, growing robot and have run it through some challenging tests.
“Essentially, we’re trying to understand the fundamentals of this new approach to getting mobility or movement out of a mechanism,” explained Allison Okamura, professor of mechanical engineering and senior author of the paper. “It’s very, very different from the way that animals or people get around the world.”
To investigate what their robot can do, the group created prototypes that move through various obstacles, travel toward a designated goal, and grow into a free-standing structure. This robot could serve a wide range of purposes, particularly in the realms of search and rescue and medical devices, the researchers said.
A growing robot
The basic idea behind this robot is straightforward. It’s a tube of soft material folded inside itself, like an inside-out sock, that grows in one direction when the material at the front of the tube everts, as the tube becomes right-side-out. In the prototypes, the material was a thin, cheap plastic and the robot body everted when the scientists pumped pressurized air into the stationary end. In other versions, fluid could replace the pressurized air.
What makes this design extremely useful is that it produces movement of the tip without movement of the body.
“The body lengthens as the material extends from the end but the rest of the body doesn’t move,” explained Elliot Hawkes, a visiting assistant professor from the University of California, Santa Barbara and lead author of the paper. “The body can be stuck to the environment or jammed between rocks, but that doesn’t stop the robot because the tip can continue to progress as new material is added to the end.”
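To first order, the force available at the tip of a pressure-driven everting tube is simply internal pressure times cross-sectional area. The quick calculation below uses illustrative numbers, not measurements from the Stanford prototypes.

```python
# Back-of-the-envelope eversion force: F ~ P * A, losses ignored.
import math

pressure_pa = 50e3                  # assumed 50 kPa pneumatic pressure
diameter_m = 0.10                   # assumed 10 cm tube diameter

area = math.pi * (diameter_m / 2.0) ** 2
force_n = pressure_pa * area
print(f"tip force ~ {force_n:.0f} N (about {force_n / 9.81:.0f} kg-force)")
```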
Graduate students Joseph Greer, left, and Laura Blumenschein, right, work with Elliot Hawkes, a visiting assistant professor from the University of California, Santa Barbara, on a prototype of the vinebot. (Image credit: L.A. Cicero)
The group tested the benefits of this method for getting the robot from one place to another in several ways. It grew through an obstacle course, where it traveled over flypaper, sticky glue and nails and up an ice wall to deliver a sensor, which could potentially sense carbon dioxide produced by trapped survivors. It successfully completed this course even though it was punctured by the nails because the area that was punctured didn’t continue to move and, as a result, self-sealed by staying on top of the nail.
In other demonstrations, the robot lifted a 100-kilogram crate, grew under a door gap that was 10 percent of its diameter and spiraled on itself to form a free-standing structure that then sent out a radio signal. The robot also maneuvered through the space above a dropped ceiling, which showed how it was able to navigate unknown obstacles as a robot like this might have to do in walls, under roads or inside pipes. Further, it pulled a cable through its body while growing above the dropped ceiling, offering a new method for routing wires in tight spaces.
Difficult environments
“The applications we’re focusing on are those where the robot moves through a difficult environment, where the features are unpredictable and there are unknown spaces,” said Laura Blumenschein, a graduate student in the Okamura lab and co-author of the paper. “If you can put a robot in these environments and it’s unaffected by the obstacles while it’s moving, you don’t need to worry about it getting damaged or stuck as it explores.”
Some iterations of these robots included a control system that differentially inflated the body, which made the robot turn right or left. The researchers developed a software system that based direction decisions on images coming in from a camera at the tip of the robot.
A primary advantage of soft robots is that they can be safer than hard, rigid robots not only because they are soft but also because they are often lightweight. This is especially useful in situations where a robot could be moving in close quarters with a person. Another benefit, in the case of this robot, is that it is flexible and can follow complicated paths. This, however, also poses some challenges.
Joey Greer, a graduate student in the Okamura lab and co-author of the paper, said that controlling a robot requires a precise model of its motion, which is difficult to establish for a soft robot. Rigid robots, by comparison, are much easier to model and control, but are unusable in many situations where flexibility or safety is necessary. “Also, using a camera to guide the robot to a target is a difficult problem because the camera imagery needs to be processed at the rate it is produced. A lot of work went into designing algorithms that both ran fast and produced results that were accurate enough for controlling the soft robot,” Greer said.
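A minimal sketch of that camera-in-the-loop steering idea: locate the target in each tip-camera frame and convert its horizontal offset into a differential-inflation command. The colour threshold, gain and actuator interface are stand-ins, not the authors’ implementation.

```python
# Hedged sketch of image-based steering for a differentially inflated body.
import cv2
import numpy as np

def steering_command(frame_bgr, gain=0.002):
    """Return a value in [-1, 1]: negative steers left, positive right."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (40, 80, 80), (80, 255, 255))  # assumed green target
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return 0.0                        # target not visible: grow straight
    target_x = m["m10"] / m["m00"]
    offset = target_x - frame_bgr.shape[1] / 2.0       # pixels off-centre
    return float(np.clip(gain * offset, -1.0, 1.0))

# A positive command would inflate the left chamber more, turning the tip right.
```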
Going big—and small
The scientists built the current prototype by hand, and it is powered by pneumatic air pressure. In the future, the researchers would like to create a version that could be manufactured automatically. Future versions may also grow using liquid, which could help deliver water to people trapped in tight spaces or put out fires in closed rooms. They are also exploring new, tougher materials, like rip-stop nylon and Kevlar.
The vinebot is a tube of soft material that grows in one direction. (Image credit: L.A. Cicero)
The researchers also hope to scale the robot much larger and much smaller to see how it performs. They’ve already created a 1.8 mm version and believe small growing robots could advance medical procedures. In place of a tube that is pushed through the body, this type of soft robot would grow without dragging along delicate structures.
Okamura is a member of Stanford Bio-X and the Stanford Neurosciences Institute.
This research was funded by the National Science Foundation.
Recent events demonstrate the growing presence of indoor mobile robots: (1) Savioke’s hotel butler robot won the 2017 IERA inventors award; (2) Knightscope’s security robot mistook a reflecting pond for a solid floor and dove in face-first to the delight of Twitterdom and the media; and (3) the sale of robotic hospital delivery provider Aethon to a Singaporean conglomerate.
Are we beginning to enter an era of multi-functional robots? Certainly that is the vision of each of the vendors listed below. They see their robots greeting, assisting and running errands during business hours and then, after hours, prowling and tallying inventory and fixed assets, all the while, 24/7, checking for anomalies and suspicious activity. A SuperRobot? Or one of the many new mobile service robots that offer each of these services as a separate task? For example, Savioke, maker of the hotel butler robot, is now deploying its Relay robots with FedEx in the warehousing and logistics sector.
The indoor robot marketplace
Travis Deyle, CEO of Silicon Valley startup Cobalt Robotics which is developing indoor robots for security purposes, in an article in IEEE Spectrum, posited that commercial spaces are the next big marketplace for robotics and that there’s a massive, untapped market in each of the commercial spaces shown in his chart below:
“Commercial spaces could serve as a great stepping stone on the path toward general-purpose home robots by driving scale, volume, and capabilities. So… while billions are being spent on R&D for autonomous vehicles, indoor robots for commercial and public spaces reap the technology and cost benefits on sensors, computing, machine learning, and open-source software.”
Although the chart above focuses on the many applications within the commercial space, there is also much activity in various forms of indoor material handling using mobile robots in warehouses and distribution centers. The list of companies in that marketplace is quite large and will be detailed in a future article.
Hospital mobile robot firm sells to Singaporean conglomerate
ST Engineering acquired Pittsburgh, PA-based hospital robotics firm Aethon for $36 million. Aethon provides intralogistics in hospital environments by delivering goods and supplies using its TUG autonomous mobile robots. ST Engineering's strategic reasoning for the acquisition can be seen in this statement about the purchase:
“We evaluated the autonomous mobile robotics market thoroughly. Our evaluation led us to conclude that Aethon was the best company in this space having the right technology along with proven success in the commercialization and installation of autonomous mobile robots,” said Khee Loon Foo, General Manager, Kinetics Advanced Robotics of ST Kinetics.
Hotel robot wins 2017 IERA Inventors Award
The International Federation of Robotics (IFR) and the IEEE Robotics and Automation Society (IEEE/RAS) jointly sponsor an annual IERA (Innovation and Entrepreneurship in Robotics and Automation) Award which this year was presented to the Relay butler robot made by Savioke, a Silicon Valley startup.
Savioke's Relay robot makes deliveries all on its own in hotels, hospitals or logistics centers. Thanks to artificial intelligence and sensor technology, the robot can move safely through public spaces and navigate around people and obstacles dynamically.
The robots, which have already completed over 100,000 deliveries, can be seen in selected hotels in California and New York, Asia and the Middle East.
Indoor Robot Companies
Listed below are a few of the companies in the emerging mobile robot indoor commercial marketplaces described in Deyle's chart above. The list is not comprehensive but intended to give you an overview of who those new companies are, how far along they are, and how global they are.
Indoor Security Robots:
Recent research reports covering the security robots marketplace forecast that the market will reach $2.4 billion by 2022, at a CAGR of 9% from now until then. These forecasts include indoor and outdoor robots.
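As a quick sanity check on that forecast, compounding backwards over five years implies the market's current size:

```python
# $2.4B in 2022 at a 9% CAGR implies roughly this 2017 base (five years back).
base_2017 = 2.4e9 / (1.09 ** 5)
print(f"implied 2017 market: ${base_2017 / 1e9:.2f}B")   # about $1.56B
```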
Knightscope is a Silicon Valley security robot startup with robots in shopping malls, exhibition halls, parking lots and office complexes. It was Knightscope's robot that took the face dive in the Washington, DC pond.
Cobalt Robotics is also a Silicon Valley security robotics startup, but, as co-founder Travis Deyle describes it, “Security is just one entrée to the whole emerging world of indoor robotics.”
Gamma Two Robotics is a Colorado patrol robot maker whose new Ramsee mobile robots have sensors for heat, toxic gas, motion detection and acoustic listening.
NxT Robotics is a San Diego mobile robot startup offering both an indoor (Iris) and outdoor (Scorpion) security patrol solution.
SMP Robotics is a San Francisco maker of mobile security robots for outdoor and indoor facilities.
Anbot (Hunan Wanwei Intelligent Robot Technology Co.) is a Chinese security robot that looks very similar to Knightscope's. It can be seen prowling airport and museum public spaces in China.
Robot Security Systems is a Netherlands-based startup providing indoor mobile security robots.
Indoor Guides, Assistants, Greeters, Food Handlers and Gofer Robots:
This list could be much larger – particularly the gofer robots in the material handling field – but has been limited to those startups delivering product or with working prototypes focused on one or all of the commercial indoor market sectors.
MetraLabs is a German provider of fully autonomous mobile inventory, public space guide and retail robots for stores, malls and museums.
PAL Robotics is a Spanish maker of humanoid robots used as guides, entertainers, information providers and presenters – in multiple languages.
Pangolin Robot is a Chinese maker of restaurant server/waiter/busing robots that also has a line of greeting and delivery robots.
Simbe Robotics is a San Francisco provider of a retail-space inventory robot that audits shelves for out-of-stock items, low-stock items, misplaced items and pricing errors. Simbe's Tally robot can perform during normal store hours alongside shoppers and employees or autonomously after hours.
Bossa Nova Robotics is a Pittsburgh developer of a store robot that scans products on the shelves, makes store maps and helps employees keep track of where items are located.
BlueBotics is a Swiss provider of mobile robots, robotic platforms and products for mobile guides, marketing assistants and industrial cleaning.
Pepper, the mobile emotion-detecting robot jointly produced by Foxconn, Alibaba and SoftBank, is serving as the first point of contact in coffee stores, banks, corporate offices and other public spaces.
Yujin Robot is a Korean consumer products maker with a line of hotel and restaurant delivery robots.
Fellow Robots is a Silicon Valley developer of the NAVii robot which is used as a greeter but also maps and performs inventory scans.
Singapore Technologies Engineering Ltd (ST Engineering) has acquired Pittsburgh, PA-based robotics firm Aethon Inc through Vision Technologies Land Systems, Inc. (VTLS), and its wholly-owned subsidiary, VT Robotics, Inc, for $36 million.
The acquisition will be carried out by way of a merger with VT Robotics, a newly incorporated entity established specifically for the transaction. The merger will see Aethon as the surviving entity, operating as a subsidiary of VTLS within the ST Group’s Land Systems sector. Aethon’s leadership team and employees will remain in place, and the company will continue to operate out of its Pittsburgh, PA location.
ST Engineering, S63 on the Singapore Stock Exchange, is a Singapore-based integrated defense and engineering group focused on aerospace, electronics, and land, sea and air unmanned systems for the battlefield. It employs over 21,000 people and has annual revenues of around $5 billion.
“We evaluated the autonomous mobile robotics market thoroughly. Our evaluation led us to conclude that Aethon was the best company in this space having the right technology along with proven success in the commercialization and installation of autonomous mobile robots. We look forward to working with the Pittsburgh, PA team to grow the company,” says Khee Loon Foo, General Manager, Kinetics Advanced Robotics of ST Kinetics.
Aethon provides intralogistics in manufacturing and hospital environments, delivering goods and supplies using its TUG autonomous mobile robots. TUGs are self-driving robots capable of hauling or towing up to 1,400 lbs as they dynamically and safely navigate around people and through the corridors of client facilities.
“This acquisition is a terrific event for our company, employees and our customers since it provides Aethon with the resources and corporate backing to grow and develop new innovative robotic technology and more aggressively pursue new markets. We will now be able to expand our development capabilities to enhance our current technology and bring exciting logistics solutions to new vertical and global markets,” says Aldo Zini, CEO of Aethon.
It comes down to the question of what a robot really is. While science fiction has often portrayed robots as androids carrying out tasks in much the same way as humans, the reality is that robots take much more specialised forms. Traditional 20th century robots were automated machines and robotic arms building cars in factories. Commercial 21st century robots are supermarket self-checkouts, automated guided warehouse vehicles, and even burger-flipping machines in fast-food restaurants.
Ultimately, humans haven’t become completely redundant because these robots, while very efficient, are also kind of dumb. They do not think, they just act, in very accurate but very limited ways. Humans are still needed to work around robots, doing the jobs the machines can’t and fixing them when they get stuck. But this is all set to change thanks to a new wave of smarter, better value machines that can adapt to multiple tasks. This change will be so significant that it will create a new industrial revolution.
This era of “Industry 4.0” is being driven by the same technological advances that enable the capabilities of the smartphones in our pockets. It is a mix of low-cost and high-power computers, high-speed communication and artificial intelligence. This will produce smarter robots with better sensing and communication abilities that can adapt to different tasks, and even coordinate their work to meet demand without the input of humans.
In the manufacturing industry, where robots have arguably made the most headway of any sector, this will mean a dramatic shift from centralised to decentralised collaborative production. Traditional robots focused on single, fixed, high-speed operations and required a highly skilled human workforce to operate and maintain them. Industry 4.0 machines are flexible, collaborative and can operate more independently, which ultimately removes the need for a highly skilled workforce.
For large-scale manufacturers, Industry 4.0 means their robots will be able to sense their environment and communicate in an industrial network that can be run and monitored remotely. Each machine will produce large amounts of data that can be collectively studied using what is known as “big data” analysis. This will help identify ways to improve operating performance and production quality across the whole plant, for example by better predicting when maintenance is needed and automatically scheduling it.
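To make that kind of analysis concrete, here is a minimal sketch of predictive maintenance: flag a machine whose recent sensor readings drift away from its own historical baseline. Everything here is invented for illustration – the sensor, thresholds and data are hypothetical, not drawn from any specific Industry 4.0 platform.

```python
import numpy as np

def flag_for_maintenance(readings, baseline_window=50, recent_window=10, z_threshold=3.0):
    """Flag a machine whose recent readings drift from its historical baseline.

    readings: 1-D array of sensor values, oldest first.
    Returns True if the recent mean is more than z_threshold baseline
    standard deviations away from the baseline mean.
    """
    baseline = readings[:baseline_window]
    recent = readings[-recent_window:]
    z = abs(recent.mean() - baseline.mean()) / (baseline.std() + 1e-9)
    return z > z_threshold

# Hypothetical vibration data: stable at first, then trending upward (wear).
rng = np.random.default_rng(0)
healthy = rng.normal(1.0, 0.05, 80)
worn = rng.normal(1.0, 0.05, 20) + np.linspace(0.0, 0.5, 20)
print(flag_for_maintenance(np.concatenate([healthy, worn])))  # True -> schedule maintenance
```

A real plant would run this kind of check continuously across every machine in the network and feed the flags into an automated scheduling system.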
For small-to-medium manufacturing businesses, Industry 4.0 will make it cheaper and easier to use robots. It will create machines that can be reconfigured to perform multiple jobs and adjusted to work on a more diverse product range and different production volumes. This sector is already beginning to benefit from reconfigurable robots designed to collaborate with human workers and analyse their own work to look for improvements, such as BAXTER, SR-TEX and CareSelect.
While these machines are getting smarter, they are still not as smart as us. Today’s industrial artificial intelligence operates at a narrow level: it gives the appearance of human intelligence exhibited by machines, but that intelligence is designed, and bounded, by humans.
What’s coming next is known as “deep learning”. Similar to big data analysis, it involves processing large quantities of data in real time to make decisions about what is the best action to take. The difference is that the machine learns from the data so it can improve its decision making. A perfect example of deep learning was demonstrated by Google’s AlphaGo software, which taught itself to beat the world’s greatest Go players.
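To make “the machine learns from the data” concrete, here is a toy sketch (not from the article): a model starts with arbitrary parameters and repeatedly adjusts them to shrink its prediction error. Deep learning systems run the same loop with vastly larger models and datasets; the rule being learned here and all numbers are invented for illustration.

```python
import numpy as np

# Toy data: the machine must discover the rule y = 3x + 2 from examples.
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 200)
y = 3 * x + 2 + rng.normal(0, 0.1, 200)

w, b, lr = 0.0, 0.0, 0.1
for step in range(500):
    pred = w * x + b
    err = pred - y
    # Gradient descent: nudge parameters to reduce the mean squared error.
    w -= lr * 2 * np.mean(err * x)
    b -= lr * 2 * np.mean(err)

print(round(w, 2), round(b, 2))  # approaches 3.0 and 2.0 as the model learns
```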
The turning point in applying artificial intelligence to manufacturing could come with the application of special microchips called graphics processing units (GPUs). These enable deep learning to be applied to extremely large data sets at extremely fast speeds. But there is still some way to go, and big industrial companies are recruiting vast numbers of scientists to further develop the technology.
Impact on industry
As Industry 4.0 technology becomes smarter and more widely available, manufacturers of any size will be able to deploy cost-effective, multipurpose and collaborative machines as standard. This will lead to industrial growth and market competitiveness, with a greater understanding of production processes leading to new high-quality products and digital services.
Exactly what impact a smarter robotic workforce with the potential to operate on its own will have on the manufacturing industry is still widely disputed. Artificial intelligence as we know it from science fiction is still in its infancy. It could well be the 22nd century before robots really have the potential to make human labour obsolete by developing not just deep learning but true artificial understanding that mimics human thinking.
Ideally, Industry 4.0 will enable human workers to achieve more in their jobs by removing repetitive tasks and giving them better robotic tools. In theory, this would allow us humans to focus more on business development, creativity and science, which it would be much harder for any robot to do. Technology that has made humans redundant in the past has forced us to adapt, generally with more education.
But because Industry 4.0 robots will be able to operate largely on their own, we might see much greater human redundancy from manufacturing jobs without other sectors being able to create enough new work. Then we might see more political moves to protect human labour, such as taxing robots.
Again, in an ideal scenario, humans may be able to focus on doing the things that make us human, perhaps fuelled by a basic income generated from robotic work. Ultimately, it will be up to us to define whether the robotic workforce will work for us, with us, or against us.
Adriana Schulz, an MIT PhD student in the Computer Science and Artificial Intelligence Laboratory, demonstrates the InstantCAD computer-aided-design-optimizing interface. Photo: Rachel Gordon/MIT CSAIL
Almost every object we use is developed with computer-aided design (CAD). Ironically, while CAD programs are good for creating designs, using them is actually very difficult and time-consuming if you’re trying to improve an existing design to make the best possible product. Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Columbia University are trying to make the process faster and easier: In a new paper, they’ve developed InstantCAD, a tool that lets designers interactively edit, improve, and optimize CAD models using a more streamlined and intuitive workflow.
InstantCAD integrates seamlessly with existing CAD programs as a plug-in, meaning that designers don’t have to learn new tools to use it.
“From more ergonomic desks to higher-performance cars, this is really about creating better products in less time,” says Department of Electrical Engineering and Computer Science PhD student and lead author Adriana Schulz, who will be presenting the paper at this month’s SIGGRAPH computer-graphics conference in Los Angeles. “We think this could be a real game changer for automakers and other companies that want to be able to test and improve complex designs in a matter of seconds to minutes, instead of hours to days.”
The paper was co-written by Associate Professor Wojciech Matusik, PhD student Jie Xu, and postdoc Bo Zhu of CSAIL, as well as Associate Professor Eitan Grinspun and Assistant Professor Changxi Zheng of Columbia University.
Traditional CAD systems are “parametric,” which means that when engineers design models, they can change properties like shape and size (“parameters”) based on different priorities. For example, when designing a wind turbine you might have to make trade-offs between how much airflow you can get versus how much energy it will generate.
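In code, a parametric model is essentially a function from a small set of named parameters to geometry and performance metrics. The sketch below is a deliberately simplified, hypothetical stand-in for the turbine example; the two formulas are invented to show the trade-off structure, not real aerodynamics.

```python
from dataclasses import dataclass

@dataclass
class TurbineDesign:
    blade_length_m: float   # parameter: longer blades sweep more area
    blade_pitch_deg: float  # parameter: steeper pitch trades airflow for energy

    def swept_area(self) -> float:
        return 3.14159 * self.blade_length_m ** 2

    def airflow_score(self) -> float:
        # Invented relation: steeper pitch obstructs airflow.
        return self.swept_area() * (1 - self.blade_pitch_deg / 90)

    def energy_score(self) -> float:
        # Invented relation: steeper pitch extracts more energy per unit flow.
        return self.swept_area() * (self.blade_pitch_deg / 90)

# Changing one parameter regenerates both metrics, exposing the trade-off.
for pitch in (15, 30, 45):
    d = TurbineDesign(blade_length_m=20, blade_pitch_deg=pitch)
    print(pitch, round(d.airflow_score()), round(d.energy_score()))
```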
InstantCAD enables designers to interactively edit, improve, and optimize CAD models using a more streamlined and intuitive workflow. Photo: Rachel Gordon/MIT CSAIL
However, it can be difficult to determine the absolute best design for what you want your object to do, because there are many different options for modifying the design. On top of that, the process is time-consuming because changing a single property means having to wait to regenerate the new design, run a simulation, see the result, and then figure out what to do next.
With InstantCAD, the process of improving and optimizing the design can be done in real time, saving engineers days or weeks. After an object is designed in a commercial CAD program, it is sent to a cloud platform where multiple geometric evaluations and simulations are run in parallel.
With this precomputed data, you can instantly improve and optimize the design in two ways. With “interactive exploration,” a user interface provides real-time feedback on how design changes will affect performance, like how the shape of a plane wing impacts air pressure distribution. With “automatic optimization,” you simply tell the system to give you a design with specific characteristics, like a drone that’s as lightweight as possible while still being able to carry the maximum amount of weight.
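A rough sketch of those two modes, assuming the cloud step has already produced a table of sampled designs with precomputed metrics: interactive exploration becomes a lookup against that table, and automatic optimization becomes a constrained search over it. The drone designs and numbers below are invented for illustration, not taken from the paper.

```python
# Hypothetical precomputed samples: (wing_span_m, weight_kg, max_payload_kg)
samples = [
    (0.8, 1.2, 0.5),
    (1.0, 1.5, 0.9),
    (1.2, 1.9, 1.4),
    (1.4, 2.6, 1.6),
]

def explore(wing_span: float):
    """Interactive exploration: immediate feedback from the nearest sample."""
    return min(samples, key=lambda s: abs(s[0] - wing_span))

def optimize(min_payload: float):
    """Automatic optimization: lightest design that still meets the payload goal."""
    feasible = [s for s in samples if s[2] >= min_payload]
    return min(feasible, key=lambda s: s[1]) if feasible else None

print(explore(1.15))   # (1.2, 1.9, 1.4) -> feedback as the user drags a slider
print(optimize(1.0))   # (1.2, 1.9, 1.4) -> lightest design carrying >= 1.0 kg
```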
Optimizing an object’s design is hard because of the massive size of the design space (the number of possible design options).
“It’s too data-intensive to compute every single point, so we have to come up with a way to predict any point in this space from just a small number of sampled data points,” says Schulz. “This is called ‘interpolation,’ and our key technical contribution is a new algorithm we developed to take these samples and estimate points in the space.”
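The paper’s actual interpolation algorithm is more sophisticated, but the idea can be sketched with a generic scheme such as inverse-distance weighting: estimate the metric at an unsampled design point as a distance-weighted average over the sampled points. The design space and numbers below are invented for illustration.

```python
import numpy as np

def idw_estimate(samples_x, samples_y, query, power=2.0):
    """Inverse-distance-weighted estimate of a metric at an unsampled design point.

    samples_x: (n, d) array of sampled design parameters
    samples_y: (n,) array of the simulated metric at those samples
    query: (d,) design point that was never simulated
    """
    d = np.linalg.norm(samples_x - query, axis=1)
    if np.any(d == 0):                      # exact hit on a sample
        return float(samples_y[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * samples_y) / np.sum(w))

# Hypothetical 2-D design space (wing span, chord) with a simulated drag metric.
X = np.array([[1.0, 0.2], [1.2, 0.2], [1.0, 0.3], [1.2, 0.3]])
y = np.array([0.40, 0.35, 0.45, 0.38])
print(idw_estimate(X, y, np.array([1.1, 0.25])))  # estimate between the four samples
```

Because every query against the precomputed samples is this cheap, the estimate can be refreshed at interactive rates as the designer drags a parameter slider.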
Matusik says InstantCAD could be particularly helpful for more intricate designs for objects like cars, planes, and robots, particularly for industries like car manufacturing that care a lot about squeezing every little bit of performance out of a product.
“Our system doesn’t just save you time for changing designs, but has the potential to dramatically improve the quality of the products themselves,” says Matusik. “The more complex your design gets, the more important this kind of a tool can be.”
Because of the system’s productivity boosts and CAD integration, Schulz is confident that it will have immediate applications for industry. Down the line, she hopes that InstantCAD can also help lower the barrier for entry for casual users.
“In a world where 3-D printing and industrial robotics are making manufacturing more accessible, we need systems that make the actual design process more accessible, too,” Schulz says. “With systems like this that make it easier to customize objects to meet your specific needs, we hope to be paving the way to a new age of personal manufacturing and DIY design.”
The project was supported by the National Science Foundation.
The K5 security robot fell into a fountain in Washington, D.C.
July 17, 2017 – July 23, 2017
If you would like to receive the Weekly Roundup in your inbox, please subscribe at the bottom of the page.
News
A U.S. drone strike in Afghanistan is reported to have mistakenly killed 15 Afghan soldiers. In a statement, Afghanistan’s Ministry of Defense reported that the strike hit a security outpost in Helmand province. (Voice of America)
The U.K. Department for Transport is developing regulations that would implement a drone registration program, safety courses for drone owners, and more extensive geo-fencing to keep drones out of restricted areas. According to the BBC, it is not yet clear when the new rules will go into effect.
China announced plans to advance the development of artificial intelligence. The State Council of the People’s Republic of China released a plan to grow AI-related industries into a $59.07 billion sector by 2025. (Reuters)
Canada’s transportation safety agency issued an update to its drone regulations. The update relaxes key provisions for recreational and commercial drone users. (CTV News)
An Australian student has developed a drone that is capable of flying for longer and at much higher speeds than other consumer systems. (ABC News)
Airbus Defense and Space conducted a test flight of a subscale model of the Sagitta stealth drone that it is developing with a group of German research institutes. (Aviation Week)
YouTube channel Make it Extreme published a video showing how one can build a DIY counter-drone net gun. (Popular Mechanics)
Estonian startup Marduk Technologies plans to begin testing its Shark counter-drone system with the Estonian military in August or September. (IHS Jane’s International Defence Review)
The U.S. Navy issued a draft Request for Proposals detailing some of the characteristics of its planned MQ-25A Stingray refueling drone. (USNI News)
Police departments in Dorset, Cornwall, and Devon in the U.K. have been using drones to track reckless motorcyclists. (The Drive)
The town of Deadwood in South Dakota has approved an ordinance restricting the use of drones in the city. (Black Hills Pioneer)
Investigators have concluded that a mid-air collision in Australia that was thought to have been caused by a drone was actually caused by a bat. (ABC News)
A drone flying near the scene of a car crash in Avonport, Canada delayed the departure of a helicopter that was airlifting a patient to hospital. (CBC)
Canada’s OEX Recovery group will use a Kraken unmanned undersea vehicle to search for the remains of several subscale prototype jets that crashed into Lake Ontario in the 1950s. (Unmanned Systems Technology)
Solent Local Enterprise Partnership awarded BAE Systems a $593,871 grant to design a testing site for autonomous systems in the U.K. (Inside Unmanned Systems)
Iran Aircraft Manufacturing Industries announced that it will begin marketing the Hamaseh surveillance and strike drone to international customers. (FlightGlobal)