In 2016, the European Union co-funded 17 new robotics projects from the Horizon 2020 Framework Programme for research and innovation. 16 of these resulted from the robotics work programme, and 1 project resulted from the Societal Challenges part of Horizon 2020. The robotics work programme implements the robotics strategy developed by SPARC, the Public-Private Partnership for Robotics in Europe (see the Strategic Research Agenda).
Every week, euRobotics will publish a video interview with a project, so that you can find out more about their activities. This week features BADGER, a Robot for Autonomous Underground Trenchless Operations, Mapping and Navigation.
Objectives
The goal of the project is the design and development of an integrated underground robotic system capable of autonomously constructing small-diameter, highly curved subterranean tunnel networks, and of localization, mapping and autonomous navigation during operation. The proposed robotic system will enable tasks in application domains of high societal and economic impact, including trenchless construction (cabling and piping installations), search and rescue operations, and remote science and exploration.
Expected impact
The expected strategic impact of the BADGER project focuses on:
Introducing advanced robotics technologies, including intelligent control and cognition capabilities, to significantly increase European competitiveness;
Drastically reducing traffic congestion and pollution in European urban environments, thereby improving people's quality of life;
Enabling technologies for new potential applications: search and rescue, mining and quarrying, civil applications, mapping, etc.
The Robot Academy is a new learning resource from Professor Peter Corke and the Queensland University of Technology (QUT), the team behind the award-winning Introduction to Robotics and Robotic Vision courses. There are over 200 lessons available, all for free.
Educators are encouraged to use the Academy content to support teaching and learning in class or set them as flipped learning tasks. You can easily create viewing lists with links to lessons or masterclasses. Under Resources, you can download a Robotics Toolbox and Machine Vision Toolbox, which are useful for simulating classical arm-type robotics, such as kinematics, dynamics, and trajectory generation.
The lessons were created in 2015 for the Introduction to Robotics and Robotic Vision courses. We describe our approach to creating the original courses in the article, An Innovative Educational Change: Massive Open Online Courses in Robotics and Robotic Vision. The courses were designed for university undergraduate students, but many lessons are suitable for anybody; see the difficulty rating on each lesson.
Under Masterclasses, students can choose a subject and watch a set of videos related to that particular topic. Single lessons can offer a short training segment or a refresher. Three online courses, Introducing Robotics, are also offered.
Below are two examples: a single lesson and a masterclass. We encourage everyone to take a look at the QUT Robot Academy by visiting our website.
Single Lesson
Out and about with robots
In this video, we look at a diverse range of real-world robots and discuss what they do and how they do it.
Masterclass
Robot joint control: Introduction (Video 1 of 12)
In this video, students learn how we make robot joints move to the angles or positions that are required to achieve the desired end-effector motion. This is the job of the robot’s joint controller. In the lecture, we will discuss the realm of control theory.
Robot joint control: Architecture (video 2 of 12)
In this lecture, we discuss how a robot joint is a mechatronic system comprising motors, sensors, electronics and embedded computing that implements a feedback control system.
Robot joint control: Actuators (video 3 of 12)
Actuators are the components that actually move the robot’s joint. So, let’s look at a few different actuation technologies that are used in robots.
To watch the rest of the video series, visit their website.
If you enjoyed this article, you may also want to read:
Here’s a video you will want to watch. “The Wolf: The Hunt Continues” is really an ad showing how a hacker can enter a network through an unprotected printer (or robot). Christian Slater stars as the evil hacker.
“There are hundreds of millions of business printers in the world and less than 2% are secure,” said Vikrant Batra, Global Head of Marketing for Printing & Imaging, HP. “Everyone knows that a PC can be hacked, but not a printer.” [Hence the need to inform about how easily a printer can be hacked and the consequences of that.]
Although not related to the recent WannaCry attack, which held hundreds of thousands of organizations to ransom and compromised millions of personal records, HP’s terrifying 7-minute advertisement about securing seemingly inconsequential devices dramatizes what can happen if we don’t stay a step ahead of the threats out there waiting to happen. As companies attempt to stream and analyze data from their Internet of Things (IoT) sensors and software, and from varied pieces of equipment throughout their facilities, attacks like the one depicted in the HP video will certainly be attempted.
One that comes to mind is FANUC’s plan to network all its CNCs, robots and peripheral devices and sensors used in automation systems with the goal of optimizing up-time, maintenance schedules and manufacturing profitability. FANUC is collaborating with Cisco, Rockwell and Preferred Networks to craft a secure system which they’ve named FIELD. Let’s hope it works.
Fortune Magazine recently reported about consumer products that spy on their users by companies attempting to learn new business models based on data:
What do a doll, a popular set of headphones, and a sex toy have in common? All three items allegedly spied on consumers, creating legal trouble for their manufacturers.
In the case of We-Vibe, which sells remote-control vibrators, the company agreed to pay $3.75 million in March to settle a class-action suit alleging that it used its app to secretly collect information about how customers used its products. The audio company Bose, meanwhile, is being sued for surreptitiously compiling data—including users’ music-listening histories—from headphones.
For consumers, such incidents can be unnerving. Almost any Internet-connected device—not just phones and computers—can collect data. It’s one thing to know that Google is tracking your queries, but quite another to know that mundane personal possessions may be surveilling you too.
So what’s driving the spate of spying? The development of ever-smaller microchips and wireless radios certainly makes it easy for companies. As the margins on consumer electronics grow ever thinner, you can’t blame companies for investigating new business models based on data, not just on devices.
Robotera is a fusion of the Japanese words for robot (robo) and temple (tera). Buddhist temples in Japan are offering robotics classes to foster future generations’ interest in Buddhist culture and temples.
As well as offering traditional activities—such as tea ceremonies, lectures, reading and crafting—temples in Japan have started working with the robotics academy franchise Robo Done to expand their educational offerings and teach robotics to kids aged 6 and up.
These activities have been a great hit, drawing numerous kids and their parents to the temples. As a result, the number of temples offering robotics classes is growing rapidly. From Daigoji temple in Kyoto to Shinagawaji temple in Tokyo, these innovative programming courses are helping a new generation engage with traditional culture while promoting STEM education and robotics. The old customs are innovating to move closer to the 21st century without losing their roots: a clear example of the two faces of Japan, tradition and future.
Click here to read more at Robotera, or read more about Robo Done School here.
The Xponential 2017 national conference was held May 8-11 by the Association for Unmanned Vehicle Systems International (AUVSI) in the Kay Bailey Hutchison Convention Center in Dallas, Texas. The event took place in the largest exhibit hall ever dedicated to unmanned systems and robotics, with over 370,000 square feet. It featured over 650 robotics organizations – companies, research institutions, universities, consultants, nonprofits and more – from the U.S. and countries worldwide.
Here’s a sample of images from the world’s largest trade show for unmanned systems. All images by Lucien Miller, CEO of innov8tivedesigns.com.
SoftBank, the giant telecom company, is venturing out into the world of robotics and transportation services. DealStreet Asia said that SoftBank is trying to transform itself into the ‘Berkshire Hathaway of the tech industry’ with the recent launch of a $100 billion technology fund.
UPDATED 5/24/17: SoftBank’s acquisition of 4.9% of the outstanding shares of Nvidia Corp.
First, SoftBank bought Aldebaran, maker of the Nao and Romeo robots, and redirected it to produce the Pepper robot, which has been sold in the thousands to businesses as a guide, information source and order taker. It then formed larger partnerships with Foxconn and Alibaba to manufacture and market Pepper and other consumer products, and most recently established the $100 billion technology fund.
Recognizing that the telecom services market has matured, SoftBank is putting its money where it can participate in the new worlds of robotics and transportation as a service. Its $5 billion investment in Didi Chuxing, China’s largest ride-sharing company, is a perfect example.
Didi Chuxing
Didi, which already serves more than 400 million users across China, provides services including taxi hailing, private car-hailing, Hitch (social ride-sharing), DiDi Chauffeur, DiDi Bus, DiDi Test Drive, DiDi Car Rental and DiDi Enterprise Solutions to users in China via a smartphone application.
Tencent, Baidu and Alibaba are big investors — even Apple invested $1 billion.
The transformation of the auto industry into one focused on providing transportation services is a moving target with much news, talent movement, investment and widely-varying forecasts. But all signs show that it is booming and growing.
For more information on this subject, read the views of Chris Urmson, previous CTO of Google’s self-driving car group, in my article entitled: Transportation as a Service: a look ahead.
SoftBank Group Corp. acquired a $4 billion stake in Nvidia Corp., making it the fourth-largest shareholder of the graphics chipmaker.
Nvidia
Nvidia, a gaming chipmaker, has been receiving a lot of media attention for their GPU deep learning AI which they call ‘the next era of computing’ — with the GPU acting as the brain of computers, robots and self-driving cars that can perceive and understand the world around their sensors.
Nvidia recently introduced the NVIDIA Isaac™ robot simulator, which utilizes sophisticated video-game and graphics technologies to train intelligent machines in simulated real-world conditions before they get deployed. The company also introduced a set of robot reference-design platforms that make it faster to build such machines using the NVIDIA Jetson™ platform.
“Robots based on artificial intelligence hold enormous promise for improving our lives, but building and training them has posed significant challenges. NVIDIA is now revolutionizing the robotics industry by applying our deep expertise in simulating the real world so that robots can be trained more precisely, more safely and more rapidly.”
The Middle East and North Africa’s youthful, fast-urbanizing population is perfectly placed to embrace technology and reap the rewards of the Fourth Industrial Revolution.
Much has been written already about the arrival of the Fourth Industrial Revolution (4IR) and the opportunity that the convergence of its new technologies offers in terms of building value into production systems and economies around the world. In one sense, the playing field could be levelled out. Localized production is being made more feasible for many small producers, setting developing communities on a path towards self-sufficiency, while falling costs could enable factories of all sizes to boost their productivity levels.
However, on the opposite side of the equation, news headlines have been dominated by predictions that human workers will be substituted by robots, leading to widespread job losses and heightened societal challenges. Additionally, doubt has been shed on the ability of regions that are less industrialized, or those with fractured economies and infrastructure, to be able to respond to these disruptions and compete effectively in the future.
For the Middle East and North Africa, it’s a critical question. Clearly, the region contains a mixture of countries in very different situations, ranging from those with active conflicts, challenged societal cohesion and decreasing incomes from natural resource reserves, to thriving, inclusive, relatively advanced economies.
However, the collision of some of the region’s characteristic megatrends with the 4IR phenomenon actually positions it to take a leading role in adopting and leveraging new technologies. Here are some examples:
A rapidly growing, young, tech-savvy population and the role of augmented reality/virtual reality (AR/VR) and wearables. More than 40% of people across the Middle East and North Africa are under the age of 25, and population growth is second only to sub-Saharan Africa. This growth is set to continue, with the total population forecast to reach 700 million by 2050. While clearly this indicates an urgent need to create jobs and build new capabilities, a new generation of millennial workers who have grown up with technology at their fingertips are arguably more likely to adapt to the needs of the new production age. For example, companies could use augmented reality to conduct “hyper-training” for employees, resulting in increased engagement, dramatically faster training times and a more capable workforce.
Urbanization, new infrastructure development and the internet of things (IoT). Around 263 million people – or 62% of the region’s population – are city-dwellers, and this urban populace is expected to double by 2040. This means more construction, and more opportunities to embed IoT devices into current and future builds, traffic management, energy management and other smart systems, to help the region take a leap forward. Cities around the world have begun to harness the power of digital connectivity. Barcelona uses smart lamp posts that sense pedestrians to adjust lighting, sample air quality and share information with city agencies and the public. Singapore uses smart bus fleets that identify issues and significantly reduce crowding and wait times. Dubai has installed new traffic signals that spot the movement of pedestrians and automatically modify the signal timing to encourage more people to walk and help reduce accidents.
Economic diversification, productivity and the role of robots. Volatile oil prices have placed strain on the Middle East’s oil-exporting countries, resulting in a redoubled focus on economic diversification. All of the Gulf Cooperation Council (GCC) countries have designed long-term plans to increase their diversification, attract investment, grow the SME sector and private-sector jobs, and increase GDP and exports. Converging new technologies offer an alternative to traditional routes to development, and make it possible for countries to enter new industry sectors with relative ease.
Examples include “speed factories” that use robots and 3D printing to rapidly produce customized goods, which can help accelerate localization of production. This automation can multiply productivity and enable countries in the region to enter new markets where they could not compete before, such as aerospace-parts manufacturing. Furthermore, the use of automation and autonomy for cargo and passenger movement could increase the region’s competitive advantage.
Of course, there are undoubtedly challenges for the region to overcome as it builds out a new future, all of which will be key to addressing the root causes of regional security conflicts. Three that are central to unleashing the transformational growth of the Fourth Industrial Revolution are:
Building the right capabilities. Some regions are behind on the path towards industrialization, and it is acknowledged that students and professionals must be better equipped with the relevant skills – not only in STEM subjects, but in the intrinsically human skills that will be in demand more than ever before as automation alters the role for humans in the workplace.
Supportive governance, regulation and policies. This comes down to governments adopting more progressive and inclusive frameworks that encourage demand, stimulate investment, boost enterprise development, reduce corruption and redress the imbalances of historical exclusion in some of its societies.
Pan-regional integration. Finally, in a world where regions are beginning to look inwards, the Middle East and North Africa can stand to gain from improving their regional economic exchanges. Economic integration in the region is among the lowest globally. Increased intra-regional mobility of goods, capital and workforce would enable it to boost economic growth and better cope with the disruptions of 4IR.
I am in no doubt that the Middle East and North Africa stand at the threshold of an enormous opportunity as the digital revolution beckons. I believe that it has the capacity to seize that opportunity and to continue its path towards a highly productive and more integrated future.
Read the original article on the WEF website here.
RoboCup is an international scientific initiative with the goal to advance the state of the art of intelligent robots. Established in 1997, the original mission was to field a team of robots capable of winning against the human soccer World Cup champions by 2050.
To celebrate 20 years of RoboCup, the Federation is launching a video series featuring each of the leagues with one short video for those who just want a taster, and one long video for the full story. Robohub will be featuring one league every week leading up to RoboCup 2017 in Nagoya, Japan.
This week, we take a look at the heart-pumping excitement watching the popular RoboCup soccer leagues. You’ll hear about the history and ambitions of RoboCup from the trustees, and inspiring teams from around the world.
Short version:
Long video:
Can’t wait to watch the rest? You can view all the videos on the RoboCup playlist below:
The way animals move has yet to be matched by robotic systems. This is because biological systems exploit compliant mechanisms in ways their robotic cousins do not. However, recent advances in soft robotics aim to address these issues.
The stereotypical robot gait consists of jerky, uncoordinated and unstable movements. This is in stark contrast to the graceful, fluid movements seen throughout the animal kingdom. In part, this is because robots are built of rigid, often metallic, links, unlike their biological cousins, which use a mixture of hard and soft materials.
A key challenge for soft and hybrid hard-soft robotics is control. The use of rigid materials means it is possible to build relatively simple mathematical models of the system, which can be used to find control laws. Without rigid materials, the model balloons in complexity, leading to systems which cannot feasibly be used to derive control laws.
One possible approach is to offload some of the responsibility for producing a behaviour from the controller to the body of the system. Some examples of systems that take this approach include the Lipson-Jaeger gripper, various dynamic walkers, and the 1-DOF swimming robot Wanda. This offloading of responsibility from the controller to the body is called ‘Morphological Computation.’
There are a number of mechanisms by which the body can reduce the burden placed on a controller: it can limit the range of available behaviours, reducing the need for a complex model of the system [1]; it can structure the sensory data in a way that simplifies state observation [2]; or it can be used as an explicit computational resource.
The idea of an arm or leg as a computer may strike people as odd. The clean lines of a MacBook look very different from our wobbly appendages. In what sense, then, is it reasonable to claim that such systems can be used as computers?
Computational capacity
To make the concept clear, it is necessary to first introduce the concept of computational capacity. In a digital computer, transistors compute logical operations such as AND and XOR, but computation is possible via any system that maps inputs to outputs.
For example, we could use a system which classifies numbers as either positive or negative as a basic building block. More complex computations can be achieved by connecting these basic units together.
Of course, not all choices of mapping are equally useful. In some cases, computations that we wish to carry out may not be possible if we have chosen the wrong building block. The computational capacity of a system is a measure of how complex a calculation we can perform by combining our basic computational unit.
To make this more concrete, it is helpful to introduce perceptrons: the forebears of today’s deep learning systems. A perceptron takes a set number of inputs and produces either a 1 or 0 as an output. To calculate the output for a given input, we follow a simple process:
Multiply each input x_i by its corresponding weight w_i
Sum the weighted inputs
Output 1 if the sum is above a threshold and output 0 otherwise.
In order to make a perceptron perform a specific computation, we have to find the corresponding set of weights.
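The three steps above can be sketched in a few lines of Python. This is a minimal illustration, not any particular library’s API: the weights for the logical AND function are found with the classic perceptron update rule, and the threshold is folded into a bias term.

```python
def perceptron(inputs, weights, bias):
    """Weighted sum of the inputs, thresholded at zero (bias plays the role of the threshold)."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s > 0 else 0

# Train on the AND function with the classic perceptron update rule.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]            # logical AND

weights, bias, lr = [0.0, 0.0], 0.0, 1.0
for _ in range(10):               # a few epochs suffice for linearly separable data
    for x, t in zip(X, targets):
        err = t - perceptron(x, weights, bias)
        weights = [w + lr * err * xi for w, xi in zip(weights, x)]
        bias += lr * err

print([perceptron(x, weights, bias) for x in X])  # [0, 0, 0, 1]
```

Run on the XOR targets instead, the same loop never settles: the updates cycle forever, which is the limitation discussed next.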
If we take the perceptron as our basic building block for computation, we can ask what kinds of computations it can perform.
It has been shown that perceptrons are limited to the computation of linear functions; a perceptron cannot, for example, correctly learn to compute the XOR function.[3]
To overcome this limitation, it is necessary to expand the complexity of the computational system. In the case of the perceptron, we can add “hidden” layers, turning the perceptron into a neural network. Neural networks with enough hidden layers are in theory capable of computing almost any function which depends only upon its input at the current time.[4]
If we desire a system that can perform computations that depend on prior inputs (i.e. have memory), then we need to add recurrent connections, which connect a node to itself. Recurrent neural networks have been shown to be Turing complete, which means they could, in theory, be used as a basis for a general purpose computer. [5]
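A single hidden layer is enough to escape the perceptron’s linear limitation. The tiny network below is hand-wired rather than learned: one hidden unit computes OR, another computes AND, and the output unit fires only when OR is true and AND is false, which is exactly XOR.

```python
def step(s):
    """Threshold unit: the perceptron's activation."""
    return 1 if s > 0 else 0

def xor_net(x1, x2):
    h_or  = step(x1 + x2 - 0.5)      # hidden unit 1: OR
    h_and = step(x1 + x2 - 1.5)      # hidden unit 2: AND
    return step(h_or - h_and - 0.5)  # output: OR and not AND  ==  XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))  # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```

No single line through the input plane separates the 1s from the 0s of XOR, but the hidden layer remaps the inputs into a space where one threshold suffices.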
FIGURE (FFNN and RNN)
These considerations give us a set of criteria by which we can begin to assess the computational capacity of a system — we can classify the computational capacity by the amount of non-linearity and memory that a system has.
The capacity of bodies
Returning to the explicit use of a body as a computational structure, we can begin to consider the capacity of a body. To start with, we need to define an input and output for our computer.
In most cases, the body of a robot will contain both actuators and sensors. If we limit ourselves to systems with a single actuator, then the natural definitions would be to take the actuator as the input and the sensors as the output. To simplify the output, we can take a weighted sum of the sensor readings; we know from the limitations of the perceptron that this readout will not add either memory or non-linearity to our system.
FIGURES (OCTOPUS SYSTEM)
Given such a setup, we can then test the ability of the system to perform certain computations. What kind of computations can this system perform?
To answer this question, Nakajima et al.[6] used a silicone arm, inspired by an octopus tentacle. The arm was actuated by a single servo motor and contained a number of stretch sensors embedded within it.
Amazingly, this system was shown to be capable of computing a number of functions that required both non-linearity and memory. For example, the arm was shown to be capable of acting as a parity bit checker. Given a sequence of inputs (either 1 or 0), the system would output a 1 if there was an even number of 1s in the input and 0 otherwise. Such a task requires both memory of prior inputs and non-linearity. As the readout cannot add either of these, we must conclude that the body itself has added this capacity; in other words, the body contributes computational capacity to the system.
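The flavour of this experiment can be mimicked in software with a toy “body”: a random nonlinear dynamical system driven by the input stream, read out by nothing more than a trained weighted sum. This is only a simulated analogue of the silicone arm (the state size, spectral radius and delay-1 parity task are our own illustrative choices, not Nakajima et al.’s setup), but it shows the same principle: a linear readout of the raw input could never compute XOR of the current and previous bits, yet it succeeds when fed the states of a dynamical body that supplies the missing memory and non-linearity.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                        # size of the simulated "body" state
W = rng.normal(0, 1, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # scale spectral radius below 1 (fading memory)
w_in = rng.uniform(-1, 1, N)

T = 1200
u = rng.integers(0, 2, T)                      # binary input stream (the "actuator" signal)
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])           # nonlinear dynamics with internal memory
    states[t] = x

# Target: parity (XOR) of the current and previous input bit.
y = np.zeros(T, dtype=int)
y[1:] = u[1:] ^ u[:-1]

# Linear readout trained by least squares -- the "weighted sum of the sensors".
washout = 100                                  # discard initial transient
A = np.hstack([states[washout:], np.ones((T - washout, 1))])
w_out, *_ = np.linalg.lstsq(A, y[washout:], rcond=None)
pred = (A @ w_out > 0.5).astype(int)
acc = (pred == y[washout:]).mean()
print(f"parity accuracy: {acc:.3f}")
```

Since the readout itself is purely linear and memoryless, any accuracy above chance on this task must come from the dynamics of the body, which is exactly the argument made about the silicone arm.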
Prospects and outlook
A number of systems which exploit the explicit computational capacity of the body have already been built. For example, the Kitty robot [7] uses its compliant spine to compute a control signal. Different behaviours can be achieved by adjusting the weights of the readout, and the controller is robust to certain perturbations.
As a next step, we are investigating the role of adaptive bodies. Many biological systems are capable of not only changing their control but also adjusting the properties of their bodies. Finding the right morphology can simplify control as discussed in the introduction. However, it will also affect the computational capacity of the body. We are investigating the connection between the computational capacity of a body and its behaviour.
[1] Tedrake, Russ, Teresa Weirui Zhang, and H. Sebastian Seung. “Learning to walk in 20 minutes.” Proceedings of the Fourteenth Yale Workshop on Adaptive and Learning Systems. Vol. 95585. 2005.
[2] Lichtensteiger, Lukas, and Peter Eggenberger. “Evolving the morphology of a compound eye on a robot.” Advanced Mobile Robots (Eurobot ’99), Third European Workshop on. IEEE, 1999.
[3] Minsky, Marvin, and Seymour Papert. “Perceptrons.” (1969).
[5] Siegelmann, Hava T., and Eduardo D. Sontag. “On the computational power of neural nets.” Journal of computer and system sciences 50.1 (1995): 132-150.
[6] Nakajima, Kohei, et al. “Exploiting short-term memory in soft body dynamics as a computational resource.” Journal of The Royal Society Interface 11.100 (2014): 20140437.
[7] Zhao, Qian, et al. “Spine dynamics as a computational resource in spine-driven quadruped locomotion.” Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on. IEEE, 2013.
Drone company Atmos UAV has launched Marlyn, a lightweight drone that flies automatically, effortlessly and in high winds. One of the first customers to sign up is Skeye, Europe’s leading unmanned aircraft data provider. This new technology allows industry professionals around the world to map terrain up to 10 times faster and is designed to eliminate drone crashes.
“With her unique properties, Marlyn allows us to tackle even our most challenging jobs,” says Pieter Franken, co-founder of Skeye, one of Europe’s leading unmanned aircraft data providers. He continues: “We expect time savings of up to 50% and moreover save a huge amount of our resources and equipment.” Marlyn can cover 1 km² in half an hour with a ground sampling distance of 3 cm.
“We are very excited to work together with Skeye and to have the opportunity to implement their operational expertise in this promising project,” Sander Hulsman, CEO of Atmos UAV adds. “Marlyn is all about making aerial data collection safer and more efficient, allowing professional users across all industries to access the skies, enabling them to focus more on analysing the actual information and improving their business effectiveness.”
Mapping made easy
With Marlyn, mapping jobs consist of four easy steps. First, a flight plan is generated based on the required accuracy and the specified project area. Secondly, the drone starts its flight and data collection by a simple push of a button. Thirdly, after Marlyn has landed at the designated spot, the captured data is automatically organised and processed by image processing software of choice. Finally, a detailed analysis can be done to provide actionable insights.
About Atmos UAV
Atmos UAV is a high-tech start-up that designs and manufactures reliable aerial observation and data gathering solutions for professional users. It all originated from a project at Delft University of Technology. With the support of its faculty of Aerospace Engineering, it grew into the fast-growing spin-off company Atmos UAV. The company specializes in land surveying, mining, precision agriculture, forestry and other mapping related applications. Atmos UAV is currently hiring to accommodate its rapid expansion.
In 1985, a twenty-two-year-old Garry Kasparov became the youngest World Chess Champion. Twelve years later, he was defeated by the only player capable of challenging the grandmaster, IBM’s Deep Blue. That same year (1997), RoboCup was formed to take on the world’s most popular game, soccer, with robots. Twenty years later, we are on the threshold of accomplishing the biggest feat in machine intelligence: a team of fully autonomous humanoids beating human players at FIFA World Cup soccer.
Many of the advances that have led to the influx of modern autonomous vehicles and machine intelligence are the result of decades of competitions. While Deep Blue and AlphaGo have beaten the world’s best players at board games, soccer adds real-world complexities (see chart) that must be mastered to best humans on the field. This requires RoboCup teams to combine a number of mechatronic technologies within a humanoid device, such as real-time sensor fusion, reactive behavior, strategy acquisition, deep learning, real-time planning, multi-agent systems, context recognition, vision, strategic decision-making, motor control, and intelligent robot control.
Professor Daniel Lee of University of Pennsylvania’s GRASP Lab described the RoboCup challenges best, “Why is it that we have machines that can beat us in chess or Jeopardy but we can beat them in soccer? What makes it so difficult to embody intelligence into the physical world?” Lee explains, “It’s not just the soccer domain. It’s really thinking about artificial intelligence, robotics, and what they can do in a more general context.”
RoboCup has become so important that the challenge of soccer has now expanded into new leagues that focus on many commercial endeavors from social robotics, to search & rescue, to industrial applications. These leagues have a number of subcategories of competition with varying degrees of difficulty. In less than two months, international teams will convene in Nagoya, Japan for the twenty-first games. As a preview of what to expect, let’s review some of last year’s winners. And just maybe it could give us a peek at the future of automation.
RoboCup Soccer
While Iran’s human soccer team is ranked 28th in the world, its robot counterpart (Baset Pazhuh Tehran) won 1st place in the AdultSize Humanoid competition. Baset’s secret sauce is its proprietary algorithms for motion control, perception, and path planning. According to Baset’s team description paper, the key was building a “fast and stable walk engine” based on the lessons of past competitions. By combining the actuator data from each joint and adjusting the inverse and forward kinematics, the walk engine compensates for external forces, keeping the robot standing when it collides with other robots or obstacles. Another big factor was the goalkeeper, which used a stereo vision sensor to detect incoming plays and gain better percepts of goal poles, obstacles, and the opponent’s goalkeeper, locating each object in the robot’s own coordinate frame. The team is part of a larger Iranian corporation, Baset, that could deploy this perception technology beyond the pitch: its oil and gas clients could benefit from better localization and object recognition for pipeline inspections and autonomous work vehicles. If by 2050 RoboCup’s humanoids are to play humans at soccer, one has to wonder whether Baset’s mechanical players will spend their offseason working in the Arabian Peninsula.
RoboCup Rescue
In 2001, the RoboCup organization added simulated rescue to the course challenge, paving the way for many life-saving innovations already being embraced by first responders. The course starts with a simulated earthquake environment in which the robot performs a 20-minute search and rescue mission. The robots are graded on a series of obstacles designed to assess autonomous operation, mobility, and object manipulation. Points are awarded for the number of victims found, the details gathered about each victim, and the quality of the area mapped. In 2016, students from King Mongkut’s University of Technology North Bangkok won first place with their Invigorating Robot Activity Project (iRAP).
Like Baset, iRAP built its success largely on solving problems from previous contests, where it consistently placed in the top tier. The team fielded a total of four robots: one autonomous robot, two teleoperated robots, and one aerial drone. Each robot carried multiple sensors providing critical data, such as CO2 levels, temperature, positioning, 2D mapping, images, and two-way communications. iRAP’s devices navigated the test environment’s rough surfaces, hard terrain, rolling floors, stairs, and inclines with remarkable ease. The most impressive performer was the caged quadcopter, which used enhanced sensors to localize itself within an outdoor search perimeter. According to the team’s description paper, “we have developed the autonomously outdoor robot that is the aerial robot. It can fly and localize itself by GPS sensor. Besides, the essential sensors for searching the victim.” It is interesting to note that the Thai team’s design was remarkably similar to Flyability’s Gimball, which won first place in the UAE’s 2015 Drones for Good Competition. Like the RoboCup winner, the Gimball was designed specifically for search & rescue missions, using a lightweight carbon fiber cage.
As RoboCup contestants push the envelope of navigation and mapping technologies, it is quite possible that the 2017 fleet will include subterranean devices that can actively find victims within minutes of their being buried.
RoboCup @Home
The home, like soccer, is one of the most chaotic conditions for robots to operate in successfully. It is also one of the biggest areas of interest for consumers. Last year, RoboCup @Home celebrated its 10th anniversary by bestowing the top accolade on Team-Bielefeld (ToBI) of Germany. ToBI built a humanoid-like robot capable of learning new skills through natural language within unknown environments. According to the team’s paper, “the challenge is two-fold. On the one hand, we need to understand the communicative cues of humans and how they interpret robotic behavior. On the other hand, we need to provide technology that is able to perceive the environment, detect and recognize humans, navigate in changing environments, localize and manipulate objects, initiate and understand a spoken dialogue and analyse the different scenes to gain a better understanding of the surrounding.” In order to achieve these ambitious objectives the team created a Cognitive Interaction Toolkit (CITK) to support an “aggregation of required system artifacts, an automated software build and deployment, as well as an automated testing environment.”
Infused with the team’s proprietary software, its primary robot, the Meka M1 Mobile Manipulator, demonstrated the latest developments in human-robot interaction within the domestic setting. The team showcased how the Meka was able to open previously shut doors, navigate safely around a person blocking its way, and recognize and grasp many household objects. According to the team, “the robot skills proved to be very effective for designing determined tasks, including more script-like tasks, e.g. ’Follow-Me’ or ’Who-is-Who’, as well as more flexible tasks including planning and dialogue aspects, e.g. ’General-PurposeService-Robot’ or ’Open-Challenge’.”
RoboCup @Work
The @Work category debuted in 2016, with the German team from Leibniz Universität Hannover (LUHbots) winning first place. While LUHbots’ hardware was mostly off-the-shelf parts (a KUKA youBot mobile robot), the software utilized a number of proprietary algorithms. According to the paper, “in the RoboCup we use this software e.g. to grab objects using inverse kinematics, to optimize trajectories and to create fast and smooth movements with the manipulator. Besides the usability the main improvements are the graph based planning approach and the higher control frequency of the base and the manipulator.” The key to using this approach in a factory setting is robust object recognition. The paper explains how the robot grips an object from a moving conveyor: “the robot measures the speed and position of the object. It calculates the point and time where the object reaches the task place. The arm moves above the calculated point. Waits for the object and accelerates until the arm is directly above the moving-object with the same speed. Overlapping the down movement with the current speed until gripping the object. The advantage of this approach is that while the calculated position and speed are correct every orientation and much higher objects can be gripped.”
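The conveyor-gripping approach the paper describes boils down to a constant-velocity intercept prediction: measure the object, predict where and when it reaches the pick point, and match speeds before gripping. A minimal sketch of that prediction step, with hypothetical names and a simplified one-axis model that is not taken from the team’s code:

```cpp
// Sketch of the intercept idea LUHbots describe: given a measured
// object position and speed along a conveyor axis, predict when the
// object reaches a fixed pick position so the arm can wait above it
// and match speeds at contact. Names and the constant-speed
// assumption are illustrative only.
struct Intercept {
    double timeToReach;  // seconds until the object reaches pickPos
    double matchSpeed;   // speed the gripper must match at contact
};

Intercept planIntercept(double objectPos, double objectSpeed, double pickPos) {
    // Constant-velocity prediction: remaining distance / speed.
    return {(pickPos - objectPos) / objectSpeed, objectSpeed};
}
```

Because position and speed are measured rather than assumed, the same intercept works regardless of the object’s orientation or height, which is the advantage the paper highlights.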
Similar to other finalists, LUHbots’ object recognition software proved the determining factor in its success. RoboCup’s goal of playing World Cup soccer with robots may seem trivial, but its practice is anything but meaningless. In each category, the advances developed on the field of competitive science are paying real dividends on a global scale across many industries.
In the words of the RoboCup mission statement: “The ultimate goal is to ‘develop a robot soccer team which beats the human world champion team.’ (A more modest goal is ‘to develop a robot soccer team which plays like human players.’) Needless to say, the accomplishment of the ultimate goal will take decades of effort. It is not feasible with current technologies to accomplish this in the near future. However, this goal can easily lead to a series of well-directed subgoals. Such an approach is common in any ambitious or overly ambitious project. In the case of the American space program, the Mercury project and the Gemini project, which manned an orbital mission, were two precursors to the Apollo mission. The first subgoal to be accomplished in RoboCup is ‘to build real and software robot soccer teams which play reasonably well with modified rules.’ Even to accomplish this goal will undoubtedly generate technologies, which will impact a broad range of industries.”
* Editor’s note: thank you to Robohub for providing a twenty-year history of RoboCup videos.
A U.S. Court of Appeals has struck down the Federal Aviation Administration’s drone registration requirement for recreational operators. The federal court ruled that the registration system violates the 2012 FAA Modernization and Reform Act, which does not grant the FAA the authority to regulate hobby aircraft. More than 820,000 people have registered their drones with the FAA since the requirement was implemented in 2015. (Recode)
In a statement, Senator John Hoeven (R-ND) urges the Department of Defense to develop an air traffic management system for drones. (Press Release)
In a speech at the Special Operations Forces Industry Conference, General Raymond Thomas said that armed ISIS drones were the “most daunting problem” of the past year. (DefenseNews)
At the Indianapolis Star, Ryan Martin writes that police departments in Indiana are pushing back against a state law that prohibits police from using drones for surveillance.
At War is Boring, David Axe looks at how the U.S. Air Force is using robotic F-16 fighters to test the teaming of manned and unmanned aircraft in combat.
At the Intercept, Shuaib Almosawa and Murtaza Hussain write that family members of individuals killed in a U.S. drone strike are disputing claims that the men were members of al-Qaeda.
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and ETH Zurich in Switzerland are developing systems that enable drones to maintain the exact framing of a scene specified by the operator. (Engadget)
The U.S. Naval Research Laboratory is working to develop solar-powered unmanned aircraft that can remain aloft for extended periods. (R&D Magazine)
An official at the U.S. Army’s Tank Automotive Research, Development and Engineering Center has said that the service’s next tank will have the ability to operate aerial drones. (War is Boring)
Researchers at the Georgia Institute of Technology are developing drone swarms in which the aircraft are protected by a virtual bumper area that prevents them from crashing into each other. (Engadget)
Meanwhile, the Air Force Research Laboratory presented a number of unmanned aircraft projects at the second biennial Department of Defense Lab Day, including the Ninja counter-drone system and the Low Cost Attritable Aircraft Technology Program. (Press Release)
The Unmanned Mine Counter Measure Integrated System, an unmanned undersea vessel built by Russian shipyard SNSZ, has been declared fully operational with the Kazakh Navy. (Press Release)
Singapore-based firm Zycraft announced that its Independent USV completed a test in which it was deployed in the South China Sea for a continuous period of 22 days. (IHS Jane’s 360)
The U.S. Air Force Research Laboratory is developing small unmanned aircraft that can be launched from AC-130 gunships or similar manned aircraft. (IHS Jane’s 360)
During a trial operation off the coast of San Clemente Island, a U.S. Navy MQ-8B Fire Scout drone helicopter served as a laser designation platform for a Hellfire missile fired from an MH-60S Knighthawk helicopter. (Press Release)
Drones at Work
Japanese fighter jets were scrambled in response to a drone operated by a Chinese vessel over disputed waters in the East China Sea. (Nikkei Asian Review)
A Jordanian Air Force F-16 fighter jet shot down a drone that was flying near the Syrian border. (IHS Jane’s 360)
Officials in Elk City, Oklahoma enlisted students from Embry-Riddle Aeronautical University to survey damage from a recent tornado with a drone. (News9)
The West Australia Police air wing is planning to test drones for a range of operations including investigations and search and rescue missions. (The West Australian)
Researchers using drones to study narwhals found that the animals appear to use their long, sharp tusks to stun their prey. (New Atlas)
U.S. startup DroneSeed has obtained FAA approval to operate multiple drones carrying agricultural payloads. (Unmanned Aerial Online)
Police in Brick Township, New Jersey have purchased a drone for search and rescue and scene documentation. (App.com)
As part of an inauguration ceremony for an unmanned aircraft runway, Virginia governor Terry McAuliffe flew in a remotely piloted airplane. (Press Release)
The Israeli Air Force is planning to replace its Sea Scan manned maritime patrol aircraft with the IAI Heron 1, a medium-altitude long-endurance drone. (UPI)
Industry Intel
DroneWERX, a new initiative by the U.S. Special Operations Command and Strategic Capabilities Office, will focus on rapidly acquiring and fielding new drones and robots, swarms, and machine learning and artificial intelligence technologies for special operations. (DefenseNews)
The U.S. Air Force awarded General Atomics Aeronautical Systems a $400 million contract for 36 MQ-9 Reaper aircraft. (DoD)
The U.S. Navy awarded Northrop Grumman Systems a $303.9 million contract modification for three Lot 2 MQ-4C Triton unmanned aircraft and associated equipment. (DoD)
Silverstone, the British racing circuit, has reportedly contracted Drone Defence to provide counter-drone systems for the Formula 1 race in July. (The Sun)
The University of Aarhus in Denmark awarded YellowScan a $114,316 contract for a UAS LiDAR solution for surveying. (Ted)
Saudi Arabia launched the Saudi Arabian Military Industries, a state-owned military industrial company that will manufacture drones, as well as a variety of other military systems and equipment. (Reuters)
For updates, news, and commentary, follow us on Twitter. The Weekly Drone Roundup is a newsletter from the Center for the Study of the Drone. It covers news, commentary, analysis and technology from the drone world. You can subscribe to the Roundup here.
In a three-year competition, five international teams competed to develop a robot for routine, inspection, and emergency operations on oil & gas sites. Gas leaks on oil drilling rigs frequently pose heightened risks to safety and the environment.
The acronym ARGOS stands for Autonomous Robot for Gas and Oil Sites, which suggests that the robot independently performs assigned tasks. If necessary, an operator can intervene at any time via a satellite-based connection from land and take control of the robot.
The robot, developed by taurob GmbH together with TU Darmstadt, can read pointer instruments, fill level displays and valve positions using cameras and laser scanners. It can measure temperatures and gas concentrations, detect abnormal noises, obstacles and people around it, and safely manoeuvre on wet stairs. Adverse environmental conditions such as heavy rain, extreme temperatures and wind speeds do not pose a problem.
“Our robot is also the first fully automated inspection robot in the world that can be used safely in a potentially explosive atmosphere,” says Dr Lukas Silberbauer, who founded the company taurob in 2010 together with his colleague Matthias Biegl. The reason: the robot is fully ATEX certified, meaning it cannot trigger an explosion while operating in potentially explosive gas atmospheres.
The Austrian company was able to capitalize on knowledge of this certification process gained during its first project: a robot for firefighters.
“When we heard about Total’s contest, we immediately realized that this would be a great opportunity for us,” says Matthias Biegl.
Total has announced that it will use the new robots starting from 2020 on its oil drilling rigs.
The project was supported by FFG (Österreichische Forschungsförderungsgesellschaft) in the context of EUROSTARS (Co-financed by the EU).
For decades robotic exoskeletons were the subject of science fiction novels and movies. But in recent years, exoskeleton technology has made huge progress towards reality and exciting research projects and new companies have surfaced.
Typical applications of today’s exoskeletons are stroke therapy, support for users with a spinal cord injury, and industrial uses such as back support for heavy lifting or power tool operation. And while the field is growing quickly, it is currently not easy to get involved: learning materials, courses, and classes are not yet widely available. This matters because learning about exoskeletons is not possible through theory alone; ideally, it involves practical hands-on experience (feel it, understand it). Unfortunately, the necessary exoskeleton hardware is expensive and not usually available to students or hobbyists wanting to explore the field.
This is the motivation behind the EduExo kit, a 3D-printable, Arduino-powered robotic exoskeleton now on Kickstarter. The goal of the project is to make exoskeleton technology available, affordable and understandable for anyone interested.
The EduExo is a robotic exoskeleton kit that you assemble and program yourself. The kit contains all the hardware you need to build an elbow exoskeleton. An accompanying handbook contains a tutorial that will guide you through the different assembly steps. In addition, the handbook provides background information on exoskeleton history, functionality and technology to offer a well-rounded introduction to the field.
The kit is being developed for the following users:
High school and college students who want to learn about robotic exoskeleton technology to prepare themselves for a career in exoskeletons and wearable robotics.
Makers and hobbyists who are looking for an interesting and entertaining project in a fascinating field.
Teachers and professors who want to set up exoskeleton courses or labs. The EduExo classroom set provides additional teaching material to facilitate the preparation of a course.
The main challenge of the project was to design exoskeleton hardware that is both cheap enough to be affordable for a high school student, but still complex enough to be an appropriate learning platform that teaches relevant and state of the art exoskeleton technology.
To lower the price, mostly off-the-shelf components are used. The hardware is a simple one-degree-of-freedom design for the elbow joint. The EduExo box comes fully equipped and contains the handbook, the exoskeleton structure, preassembled cuffs, an Arduino microcontroller to control the device, a force sensor (and amplifier) to measure the human-robot interaction, a servo motor for actuation, and all the small parts needed to assemble and wire the exoskeleton.
The exoskeleton’s structure is 3D printable, so users with access to a 3D printer can produce the parts themselves; a maker edition is available for people who prefer doing everything on their own.
The software required to program the exoskeleton and to create computer games can be downloaded for free (the Arduino IDE to program the microcontroller, and the Unity 3D game engine).
Despite the comparatively simple design, the EduExo will teach its users many interesting aspects of exoskeleton technology. This includes how the mechanical design resembles the human anatomy, how to connect the sensors and the electronics board, and how to design and program an exoskeleton control system. Further, it explains how to connect the exoskeleton to a computer and how to use it as a haptic device in combination with a self-made virtual reality simulation or computer game.
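As an illustration of the kind of control logic such a kit teaches, here is a minimal admittance-style assist law in plain C++ (not Arduino-specific). The function name, gain, and deadband values are invented for this sketch and are not taken from the EduExo handbook:

```cpp
#include <cmath>

// Minimal admittance-style assist law for an elbow exoskeleton:
// the measured human-robot interaction force is mapped to a joint
// velocity command, so pushing on the cuff makes the joint move
// along with you. Gain and deadband values are illustrative only.
double assistVelocity(double forceNewtons) {
    const double deadbandN = 0.5;           // ignore sensor noise below 0.5 N
    const double gainDegPerSecPerN = 10.0;  // command gain: deg/s per Newton
    if (std::fabs(forceNewtons) < deadbandN) {
        return 0.0;  // inside the deadband: hold position
    }
    return gainDegPerSecPerN * forceNewtons;
}
```

On the real kit, the input would come from the force sensor (via the amplifier) and the output would drive the servo; the deadband keeps the joint still when nobody is pushing on the cuff.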
A ‘muscle control’ extension is offered separately that explains how to measure the exoskeleton user’s muscle activity and use it to control the device.
The development of the kit is currently in its final phase and it is already possible to order it through the ongoing Kickstarter campaign. Shipping is scheduled for August 2017.
Additional information can be found on the EduExo website: www.eduexo.com
A record number of teams submitted beautiful robot-created artwork for the second year of this 5-year worldwide competition. In total, there were 38 teams from 10 countries who submitted 200 different artworks!
Winners were determined based on a combination of public voting (over 3000 people with a Facebook account), judges consisting of working artists, critics, and technologists, and by how well the team met the spirit of the competition—that is, to create something beautiful using a physical brush and robotics, and to share what they learned with others. Learn more about the goals of the contest and its rules here.
Teams are encouraged to hang on to their artwork as there will be a physical exhibition of robotic-created artwork following next year’s competition (Summer 2018). This exhibition, most likely in Seattle, WA, will showcase winners of the 2017 and 2018 competition. The goal is to test Andy Warhol’s theory that “You know it’s ART, when the check clears.”
… and now, for the team winners of the 2017 Robot Art Competition:
This project from Columbia shows a high level of skill with brushstrokes. This, along with some deep learning algorithms, produces lovely paintings from source images or from scratch. When the team used a photograph as the source, they were able to create plenty of variation from the original, using a fluid medium to produce an atmospheric and open-ended visual experience. Much of their work had a painterly and contemporary presentation.
2nd Place—$25,000—CMIT ReART, Kasetsart University (Thailand)
Artists program this robot brushstroke by stroke, using a haptic recording system that generates volumes of data about the position of the brush and the forces being exerted. When re-played, ReART will generate a perfect reproduction of the original strokes. Haptic recording and playback allows for remarkably high-quality inkbrush drawings.
One of the marks of a successful commercial artist is knowing their market. In this case, the students chose to paint the popular and recently deceased Thai King, and were therefore able to get many students to vote for their team.
All of this team’s offerings are important. They aim to interpret the optical properties of oil paint and apply them to deep learning. Spontaneous paint, “mosaicing” of adjacent tones, layering effects, and the graphical interplay between paint strokes of varying textures are all sophisticated hand/eye aspects of oil painting that this team is trying to evince together with a robot.
This team also won the $5,000 prize for technical contribution.
4th Place $6000—e-David—University of Konstanz (Germany)
Using software that enables collaboration between a human artist and a roboticist, e-David closely mimics the way a human painter works on the canvas. An accompanying academic paper goes into deep detail about their approach to the project.
5th Place $4000—JACKbDU—New York University Shanghai (China)
Clean lines, interesting abstractions, and both familiar and abstract subjects made me appreciate the overall body of work. In particular, purely on aesthetics, these were found to be most compelling.
6th Place $2000—HEARTalion—Halmstad University (Sweden)
If this body of work were exhibited at a gallery and I was told that the artist aimed to capture emotion through color, composition, and textures, I would buy (says one of our professional judges). The bold brush strokes and the cool or warm templates matched to the emotional quality expressed all made sense, yet felt alive. Loved them.
Loved the composition of both of these pieces. Both pieces bring an emotive calm with just a few colors, technique, and simplicity. Gem-style optical sketch, intimate scale, good formal balance, medium application and value panels.
This project uses the precision of a robot to take crude brush dabs to astonishing levels. By incorporating 3D scans into its image generation, this bot operates with a complete understanding of its subject, much like a human painter with someone sitting for them. The team was kind enough to publish their 3D models and source code, so others can learn from and build on their work.
9th Place $2000—CARP—Worcester Polytechnic Institute (USA)
Very good line. Lots of space in this drawing and central form is simultaneously dense and traversable. Good architectural reference in clear tooling, but enlivened by sumo type or Franz Kline strokes. If the shapes can continue to be explored while maintaining the hand/machine balance, this will remain a strong venue.
Slightly synesthetic. Beautiful and balanced fusion of technical and handmade, crystalline and organic.
10th Place $2000—BABOT—Massachusetts Institute of Technology (USA)
MIT’s BABOT was able to produce this and other inspiring vistas.