Archive 31.05.2017


Livestream: Committee to take stance in the European debate on artificial intelligence

524th plenary session, Main debating chamber, European Parliament. Credit: EESC

Today, the European Economic and Social Committee (EESC) will debate its stance in the European discussion on AI, including conflicting views on certain issues, especially the question of legal personality for robots. The report, drawn up by Dutch rapporteur Ms Catelijne Muller, a member of the Workers’ Group, will be debated at the EESC’s plenary in Brussels on 31 May.

Click here to watch the livestream. Live coverage will begin at 14:30 with the debate on AI at 16:00 CEST.

You can also download and read referral-related documents about the consequences of artificial intelligence for the (digital) single market, production, consumption, employment and society here.

From the EESC website:

“Artificial Intelligence (AI) technologies offer the potential for creating new and innovative solutions to improve people’s lives, grow the economy, and address challenges in health and wellbeing, climate change, safety and security. Like any disruptive technology, however, AI carries risks and presents complex societal challenges in several areas such as labour, safety, privacy, ethics, skills and so on.

A broad approach towards AI, covering all its effects (good and bad) on society as a whole, is crucial. Especially in a time where developments are accelerating.”

RoboCup video series: Rescue league

RoboCup is an international scientific initiative with the goal of advancing the state of the art in intelligent robots. Established in 1997, its original mission was to field a team of robots capable of winning against the human soccer World Cup champions by 2050.

To celebrate 20 years of RoboCup, the Federation is launching a video series featuring each of the leagues with one short video for those who just want a taster, and one long video for the full story. Robohub will be featuring one league every week leading up to RoboCup 2017 in Nagoya, Japan.

Robotics isn’t only about playing soccer; it’s also about helping people. This week, we take a look at what it takes to be part of RoboCupRescue. You’ll hear about the history and ambitions of RoboCup from the trustees, and from inspiring teams around the world.

Short version:

Long version:

Can’t wait to watch the rest? You can view all the videos on the RoboCup playlist below:

https://www.youtube.com/playlist?list=PLEfaZULTeP_-bqFvCLBWnOvFAgkHTWbWC

Please spread the word! And if you would like to join a team, check here for more information.

If you liked reading this article, you may also want to read:

See all the latest robotics news on Robohub, or sign up for our weekly newsletter.

Faster, more nimble drones on the horizon

Engineers at MIT have come up with an algorithm to tune a Dynamic Vision Sensor (DVS) camera, simplifying a scene to its most essential visual elements and potentially enabling the development of faster drones. Image: Jose-Luis Olivares/MIT

There’s a limit to how fast autonomous vehicles can fly while safely avoiding obstacles. That’s because the cameras used on today’s drones can only process images so fast, frame by individual frame. Beyond roughly 30 miles per hour, a drone is likely to crash simply because its cameras can’t keep up.

Recently, researchers in Zurich invented a new type of camera, known as the Dynamic Vision Sensor (DVS), that continuously visualizes a scene in terms of changes in brightness, at extremely short, microsecond intervals. But this deluge of data can overwhelm a system, making it difficult for a drone to distinguish an oncoming obstacle through the noise.

Now engineers at MIT have come up with an algorithm to tune a DVS camera to detect only specific changes in brightness that matter for a particular system, vastly simplifying a scene to its most essential visual elements.

The results, which they presented at the IEEE American Control Conference in Seattle, can be applied to any linear system that directs a robot to move from point A to point B as a response to high-speed visual data. Eventually, the results could also help to increase the speeds for more complex systems such as drones and other autonomous robots.

“There is a new family of vision sensors that has the capacity to bring high-speed autonomous flight to reality, but researchers have not developed algorithms that are suitable to process the output data,” says lead author Prince Singh, a graduate student in MIT’s Department of Aeronautics and Astronautics. “We present a first approach for making sense of the DVS’ ambiguous data, by reformulating the inherently noisy system into an amenable form.”

Singh’s co-authors are MIT visiting professor Emilio Frazzoli of the Swiss Federal Institute of Technology in Zurich, and Sze Zheng Yong of Arizona State University.

Taking a visual cue from biology

The DVS camera is the first commercially available “neuromorphic” sensor — a class of sensors that is modeled after the vision systems in animals and humans. In the very early stages of processing a scene, photosensitive cells in the human retina, for example, are activated in response to changes in luminosity, in real time.

Neuromorphic sensors are designed with multiple circuits arranged in parallel, similarly to photosensitive cells, that activate and produce blue or red pixels on a computer screen in response to either a drop or spike in brightness.

Instead of a typical video feed, a drone with a DVS camera would “see” a grainy depiction of pixels that switch between two colors, depending on whether that point in space has brightened or darkened at any given moment. The sensor requires no image processing and is designed to enable, among other applications, high-speed autonomous flight.

Researchers have used DVS cameras to enable simple linear systems to see and react to high-speed events, and they have designed controllers, or algorithms, to quickly translate DVS data and carry out appropriate responses. For example, engineers have designed controllers that interpret pixel changes in order to control the movements of a robotic goalie to block an incoming soccer ball, as well as to direct a motorized platform to keep a pencil standing upright.

But for any given DVS system, researchers have had to start from scratch in designing a controller to translate DVS data in a meaningful way for that particular system.

“The pencil and goalie examples are very geometrically constrained, meaning if you give me those specific scenarios, I can design a controller,” Singh says. “But the question becomes, what if I want to do something more complicated?”

Cutting through the noise

In the team’s new paper, the researchers report developing a sort of universal controller that can translate DVS data in a meaningful way for any simple linear, robotic system. The key to the controller is that it identifies the ideal value for a parameter Singh calls “H,” or the event-threshold value, signifying the minimum change in brightness that the system can detect.

Setting the H value for a particular system can essentially determine that system’s visual sensitivity: A system with a low H value would be programmed to take in and interpret changes in luminosity that range from very small to relatively large, while a high H value would exclude small changes, and only “see” and react to large variations in brightness.

The researchers formulated an algorithm first by taking into account the possibility that a change in brightness would occur for every “event,” or pixel activated in a particular system. They also estimated the probability for “spurious events,” such as a pixel randomly misfiring, creating false noise in the data.

Once they derived a formula with these variables in mind, they were able to work it into a well-known algorithm known as an H-infinity robust controller, to determine the H value for that system.
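The role of the event threshold can be illustrated with a small sketch (hypothetical Python, not the researchers' code): a simulated pixel fires an event whenever the log-brightness has changed by at least H since its last event, so a low H reacts to tiny fluctuations and noise, while a high H passes only large changes.

```python
import math
import random

def dvs_events(brightness, H):
    """Simplified DVS pixel model: emit a (time, +1/-1) event whenever
    log-brightness changes by at least H since the last event."""
    events = []
    ref = math.log(brightness[0])
    for t, b in enumerate(brightness[1:], start=1):
        delta = math.log(b) - ref
        if abs(delta) >= H:
            events.append((t, 1 if delta > 0 else -1))
            ref = math.log(b)
    return events

random.seed(0)  # reproducible noise
# 50 samples of a noisy but steady signal, then one sudden large jump.
signal = [100 + random.uniform(-1, 1) for _ in range(50)] + [200] * 50

low_H = dvs_events(signal, H=0.001)   # fires on sensor noise as well
high_H = dvs_events(signal, H=0.5)    # only the big jump gets through
```

With a well-chosen threshold, the spurious noise events disappear and only the genuine brightness change at the jump survives, which is exactly the "sweet spot" behavior the paper describes.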

The team’s algorithm can now be used to set a DVS camera’s sensitivity to detect the most essential changes in brightness for any given linear system, while excluding extraneous signals. The researchers performed a numerical simulation to test the algorithm, identifying an H value for a theoretical linear system, which they found was able to remain stable and carry out its function without being disrupted by extraneous pixel events.

“We found that this H threshold serves as a ‘sweet-spot,’ so that a system doesn’t become overwhelmed with too many events,” Singh says. He adds that the new results “unify control of many systems,” and represent a first step toward faster, more stable autonomous flying robots, such as the Robobee, developed by researchers at Harvard University.

“We want to break that speed limit of 20 to 30 miles per hour, and go faster without colliding,” Singh says. “The next step may be to combine DVS with a regular camera, which can tell you, based on the DVS rendering, that an object is a couch versus a car, in real time.”

This research was supported in part by the Singapore National Research Foundation through the SMART Future Urban Mobility project.

Talking machines: Graphons and “inferencing”

In episode two of season three, Neil takes us through the basics of dropout, we chat about the definition of inference (it’s more about context than you think!) and hear an interview with Jennifer Chayes of Microsoft.


How a challenging aerial environment sparked a business opportunity

We develop the fastest, smallest and lightest distance sensors for advanced robotics in challenging environments. These sensors were born from a fruitful collaboration with CERN while we were developing flying indoor inspection systems.

It all began with a challenge: CERN, the European Organization for Nuclear Research, asked if we could use drones to perform fully autonomous inspections within the tunnel of the Large Hadron Collider. If you haven’t seen it, it’s a complex environment; perhaps one of the most unfriendly imaginable for fully autonomous drone flight. But we accepted the mission, rolled up our sleeves, and got to work. As you can imagine, the mission was very challenging!

Large Hadron Collider tunnel. Credit: CERN

One of the main issues we faced was finding suitable sensors to place on the drone for navigation and anti-collision. We got everything on the market that we could find and tried to make it work. Ultrasound was too slow and the range too short. Lasers were too big, too heavy, and consumed too much power. Monocular and stereo vision were highly complex, placed a huge computational burden on the system, and even then were prone to failure. It became clear that what we really needed simply didn’t exist! That’s how the concept for the TeraRanger brand of sensors was born.

Having failed to make any of the available sensing technologies work at the performance levels required, we came to the conclusion that we would need to build the sensors from the ground up. It wasn’t easy (and still isn’t), but finally we had something small enough, light enough (8g), with fast refresh rates and enough range to work well on the drone. Leading academics in robotics saw the sensor’s potential and wanted some for themselves; then more people wanted them, and before long we had a new business.

Millimetre precision wasn’t vital for the drone application, but the high refresh rates and range were. And by not using a laser emitter we were able to give the sensor a 3-degree field of view, which for many applications proved to be a real boon, giving a smoother flow of data when faced with uneven surfaces and complex, cluttered environments. It also enabled the sensor to be fully eye-safe and the supply current to remain low.

 

Plug and play multi-axis sensing

Knowing that we would often need to use multiple sensors at the same time, we also designed in support for multi-sensor, multi-axis requirements. Using a ‘hub’ we can simultaneously connect up to eight sensors to provide a simple-to-use, plug-and-play approach to multi-sensor applications. By controlling the sequence in which sensors are fired (along with some other parameters), we are able to limit or eliminate the potential for sensor cross-talk, and then stream an array of calibrated distance values in millimetres at high speed. From a user’s perspective this is about as simple as it gets, since the hub also centralises power management.
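The firing-sequence idea can be sketched as follows (hypothetical Python; the class and method names are illustrative, not the actual TeraRanger interface). Sensors whose beams might overlap are interleaved into groups, and the groups fire one after another:

```python
class SensorHub:
    """Illustrative hub: sensors with overlapping beams are placed in
    different firing groups, and the groups fire in sequence so that
    neighbouring sensors are never active at the same time."""

    def __init__(self, sensors, n_groups=2):
        # Interleave: sensors 0, 2, 4, ... fire first, then 1, 3, 5, ...
        self.groups = [sensors[i::n_groups] for i in range(n_groups)]

    def read_frame(self):
        frame = []
        for group in self.groups:
            # Fire one group at a time; cross-talk between groups is
            # impossible because they are never active together.
            frame.extend(read() for read in group)
        return frame

# Stand-in "sensors" that return fixed distances in millimetres.
hub = SensorHub([lambda: 450, lambda: 1200, lambda: 803, lambda: 95])
frame = hub.read_frame()   # readings in firing order: sensors 0, 2, 1, 3
```

A real hub would also centralise power management and timestamping, but the scheduling principle is the same.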

TeraRanger Tower

There’s no need to get in a spin

Using that same concept we continued to push the boundaries. A significant evolution has been our approach to LiDAR scanning – not just from a hardware point of view (although that is also different) but from a conceptual approach too. We’ve taken the same philosophy of small, lightweight sensors with very high refresh rates (up to 1kHz) and applied it to create a new style of static LiDAR. Rather than rotating a sensor or using other mechanical methods to move a beam, TeraRanger Tower simultaneously monitors eight axes (or more if you stack multiple units together) and streams an array of data at up to 270Hz!

Challenging the point-cloud conundrum

With no motors or other moving parts, the hardware itself has many advantages, being silent, lightweight and robust, but there is also a secondary benefit from the data. Traditional thinking in the robotics community is that to perform navigation, Simultaneous Localisation and Mapping (SLAM) and collision avoidance, you have to “see” everything around you. Just as we did at the start of our journey, people focus on complex solutions (like stereo vision) that gather millions of data points, which then require complex and resource-hungry processing. The complexity of the solution, and of the algorithms, has the potential to create many failure modes.

Having discovered for ourselves that the complicated solution is not always necessary, our approach is different: we monitor fewer points, but at very fast refresh rates, to ensure that what we think we see is really there. As a result, we build less dense point clouds, but with very reliable data. This requires less complex algorithms and processing and can be done with lighter-weight computing. The result is a more robust, and potentially safer, solution, especially when you can make some assumptions about your environment or harness odometry data to augment the LiDAR data. Many times we were told you could never do SLAM on just eight points, but we proved that wrong.
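The "fewer points, faster refresh" philosophy can be illustrated with a small, hypothetical confirmation filter (not Terabee's actual code): an axis only reports an obstacle once a close reading persists for several consecutive frames, so a lone spurious return is discarded.

```python
def confirm_obstacles(frames, threshold_mm=1500, persist=3):
    """frames: successive range readings, one distance (mm) per axis.
    An axis reports an obstacle only after `persist` consecutive
    readings closer than `threshold_mm`, so a single spurious
    return never triggers a reaction."""
    streak = [0] * len(frames[0])
    confirmed = [False] * len(frames[0])
    for frame in frames:
        for axis, dist in enumerate(frame):
            streak[axis] = streak[axis] + 1 if dist < threshold_mm else 0
            if streak[axis] >= persist:
                confirmed[axis] = True
    return confirmed

# Two axes for brevity: axis 0 sees a single noise spike,
# axis 1 sees a real obstacle for three consecutive frames.
frames = [
    [2000, 2000],
    [900, 900],
    [2000, 900],
    [2000, 900],
    [2000, 2000],
]
result = confirm_obstacles(frames)   # [False, True]
```

At a 270Hz frame rate, waiting for three confirming frames still reacts within roughly 11 milliseconds, which is why high refresh rates make sparse data trustworthy.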

Coming full circle: There are no big problems, just a lot of little problems

All of this leads back to our original mission. We’ve not solved it yet, but recently we mounted TeraRanger Tower on a drone and proved, for the first time we believe, that a static LiDAR can be used for drone anti-collision. The proof of concept was quickly put together to harness code developed for the open-source APM 3.5 flight controller, with Terabee writing drivers to hook into the codebase. Anti-collision is just one step on the journey to fully autonomous drone flight and we are still on a wild ride, but we are definitely taming the beast!

If you have expertise in drone collision avoidance and wish to help us overcome the remaining challenges, please contact us at teraranger@terabee.com. For more information about Terabee and our TeraRanger brand of sensors, please visit our website.

Live coverage of #ICRA2017

The 2017 IEEE International Conference on Robotics and Automation (ICRA) is in Singapore! The event kicked off on Monday 29 May and runs until 3 June. ICRA is one of the leading international forums for robotics researchers to present their work.

The conference theme, “Innovation, Entrepreneurship, and Real-world Solutions”, underscores the need for innovative R&D talent, dynamic and goal-driven entrepreneurs and practitioners using robotics and automation technology to solve challenging real-world problems such as shortage of labour, an ageing society, and creating sustainable environments.

ICRA 2017 will introduce a new Robotics Innovation & Entrepreneurship Forum, alongside an industry forum, a government forum, an ASEAN & emerging country forum, a public forum (ICRA-X) centred on the conference theme, and an ethics forum.

Three plenary talks are featured: Chris Gerdes (Stanford University, USA) on Modeling the Possibilities: From the Chalkboard to the Race Track to the World Beyond (Tuesday morning); Hiroaki Kitano (Sony Computer Science Laboratories, Inc., Japan) on Nobel Turing Challenge: Grand Challenge of AI, Robotics, and Systems Biology (Wednesday morning); and Kerstin Vignard (United Nations Institute for Disarmament Research) on Framing the International Discussion on the Weaponization of Increasingly Autonomous Technologies (Thursday morning).

The conference will also host a number of high-profile keynotes, technical paper sessions, workshop and tutorial sessions, and exhibitions.

Four Robot Challenges will also take place on 30-31 May.

Workshops & Tutorials

Twenty-five workshop/tutorial sessions are available for junior researchers. The sessions provide interaction and foster collaboration between young researchers, with the opportunity to listen to and closely interact with senior experts. The next set of workshops will be on Friday, 2 June.

Special sessions on Emerging Robotics Technology

This year, ICRA 2017 has invited robotics experts to share recent technological advancements in the field. These special sessions will focus on novel and creative approaches to designing and developing robots for automation, medical and surgical tasks, and space exploration missions. The sessions will be held on Tuesday, 30 May.

Audrow will be on site interviewing for upcoming Robots Podcasts; check back for the latest coverage and highlights!

The Drone Center’s Weekly Roundup: 5/29/17

The DJI Spark drone. Image via dronetrest.com

May 22, 2017 – May 28, 2017

News

A U.S. drone strike reportedly killed three members of the Pakistani Taliban. According to the Associated Press, the strike targeted a compound in Khost province, Afghanistan, although other sources indicate that the strike was in Pakistan.

The Trump administration is reportedly seeking new powers from Congress to track and destroy wayward drones inside the United States. A draft of the proposed law obtained by the New York Times would allow the federal government to intercept any drone that is viewed as a threat or is flying over a specially designated area such as military bases. According to the Times, the draft bill is currently being discussed in classified briefings on Capitol Hill.

A judge in North Dakota has acquitted a drone operator arrested at the Dakota Access Pipeline protests last year. Aaron Shawn Turgeon was charged with reckless endangerment after police claimed that he had flown close to a surveillance airplane. Footage from cell phones and from Turgeon’s drone contributed to his acquittal. (Bismarck Tribune)

Commentary, Analysis, and Art

The U.S. House Energy and Commerce Committee held a hearing on disruptive technologies and companies.

At Bellingcat, Nick Waters considers trends in ISIS drone bombing tactics based on a database of 121 strikes.

At the Los Angeles Times, Nabih Bulos examines the role that ISIS drones are playing on the battlefield in Mosul.

The editorial board at the Los Angeles Times argues that drones should not be considered the same as toys, even in the wake of the court ruling that struck down the FAA drone registration database.

At East Pendulum, Henri Kenhmann takes a closer look at the Chinese Air Force’s Wing Loong strike drone squadron.

At the Wall Street Journal, Paul J. Davies writes that profits are eluding drone manufacturers in spite of the popularity of consumer drones.

At CNET, Rick Broida surveys the cheap, $20 quadrotor drones that are currently available on the market.

Meanwhile, at Time, John Patrick Pullen looks for the perfect selfie drone.

At the Verge, Sean O’Kane writes that a new DJI policy will remove functionality from their drones unless the user registers with the company.

At Drone360, David O’Connor argues that online retailers are embracing delivery drones out of a desire to exploit consumer tendencies for instant gratification.

At TechCrunch, Brian Heater argues that in spite of new technologies and systems, consumer drones are not quite “mainstream” yet.

At the Taiwan News, Judy Lin writes that Taiwan’s push to develop a medium-altitude long-endurance surveillance drone is still in its early stages.

At the Augusta Chronicle, Thomas Gardiner writes that a Department of Energy investigation into drone sightings near nuclear sites in Georgia has not confirmed any of the reported sightings.

At the Financial Times, Jennifer Thompson looks at the impact that drones have had on workers in different industries.

At the Australian Financial Review, Andrew Burke examines the role that drones had in making the new Pirates of the Caribbean and considers how drones are changing filmmaking.

Know Your Drone

The Defense Advanced Research Projects Agency and Boeing are teaming up to build a reusable unmanned space plane called Phantom Express. (Popular Mechanics)

At a launch in New York City, commercial drone maker DJI unveiled Spark, a small consumer quadcopter that can be controlled with hand gestures. (TechCrunch)

In a test flight, a General Atomics Aeronautical Systems SkyGuardian drone remained airborne for 48 hours, a new record for a Predator-series aircraft. (Unmanned Systems Technology)

Swedish auto maker Volvo is testing an autonomous garbage truck. (AUVSI)

Swiss drone maker Aeroscout unveiled the Scout B-330, a 50 kg rotary drone that can fly for up to three hours. (GPS World)

Defense firm Textron announced that it has successfully test fired its Fury lightweight precision guided missile from a Shadow tactical drone. (Unmanned Systems Technology)

Israeli defense company UVision has developed a new extended-range loitering munition called the Hero-400EC with an endurance of two hours. (FlightGlobal)

Atmos UAV has unveiled the Marlyn, a vertical take-off and landing fixed-wing drone for commercial applications. (GIM International)

Belarus’s Indela Design Bureau has developed a military vertical take-off and landing drone called Bur. (IHS Jane’s 360)

In a test sponsored by the U.S. Navy, Lockheed Martin launched a Vector Hawk unmanned aerial vehicle from a Marlin MK2 undersea drone. (Popular Mechanics)

Drone maker SwellPro unveiled the Splash Drone 3, a waterproof multirotor drone that can float on water. (The Verge)

Otsaw, a Singapore-based startup, has developed an autonomous security ground robot equipped with a multirotor drone. (Mashable)

The U.S. Naval Undersea Warfare Center is testing a biomimetic minehunting unmanned undersea vehicle. (IHS Jane’s 360)

The U.S. Navy has issued a Request for Information relating to a planned large unmanned surface vehicle program. (FBO)

The U.S. Office of Naval Research and Naval Surface Warfare Command have developed an undersea remotely operated vehicle to assist naval dive teams. (IHS Jane’s 360)

Drones at Work

Chinese retailer JD.com has been granted government approval to operate heavy-load delivery drones along certain fixed routes. (Vox)

A drone crashed into the stands at a San Diego Padres and Arizona Diamondbacks baseball game in San Diego. (Washington Post)

U.S. Senators Feinstein (D-CA), Lee (R-UT), Blumenthal (D-CT), and Cotton (R-AR) have introduced a bill that would grant local governments the authority to regulate drone use. (Press Release)

The European Space Agency conducted a test in which it used a drone to help explore a cave system in Sicily. (Press Release)

Australia’s Civil Aviation Safety Authority has released a mobile app that shows drone operators where they can and cannot fly. (ABC)

Meanwhile, farmers in Australia are using drones to help muster herds of Merino sheep. (ABC)

Somali police have acquired five aerial surveillance drones donated by a former U.S. special operations officer. (Reuters)

The Idaho State Police have acquired four drones for a range of operations. (Idaho State Journal)

A food blogger in New Zealand used a drone to pick up his fried chicken from a KFC for him. (Mashable)

The U.S. National Oceanic and Atmospheric Administration is using three unmanned aircraft to observe atmospheric changes that could lead to severe thunderstorms. (Press Release)

A number of drone companies have teamed up to provide 3D drone imagery following flooding in Colombia. (UAV Expert News)

An Israeli Skylark drone crashed during a flight over southern Lebanon. (The New York Times)

The non-profit Lindbergh Foundation is using AI developed by Neurala to analyze footage from anti-poaching drones. (Engadget)

Park police in New York State used a drone to help rescue a dog from Letchworth Gorge. (Press Release)

The North Dakota National Guard in Fargo is slated to receive two MQ-9 Reaper drones for training this summer. (MPR News)

North Dakota was host to a simulated disaster exercise in order to test the role that drones could play in disaster response. (KVRR)

The Israeli military announced that it will deploy unmanned ground vehicles to patrol its border with the Gaza Strip in the coming years. (The Algemeiner)

The Yuku Baja Muliku Rangers in Queensland, Australia are using drones to conduct environmental inspections. (Innovators Magazine)

Industry Intel

Echodyne, a startup developing radar systems for drones, raised $29 million in a funding round led by Bill Gates. (TechCrunch)

The Defense Advanced Research Projects Agency awarded the University of Washington a base $3.5 million contract for the Aerial Dragnet program. (FBO)

The U.S. Navy awarded Northrop Grumman Systems a $49.4 million advance acquisition contract for components for the MQ-4C Triton surveillance drone. (DoD)

The U.S. Navy awarded Northrop Grumman Systems a $13 million contract for one multi-function active sensor for the MQ-4C Triton surveillance drone. (DoD)

The U.S. Navy awarded Northrop Grumman Systems a $65.5 million contract modification for logistic support and sustainment for the Broad Area Maritime Surveillance-Demonstrator (MQ-4) program. (DoD)

The U.S. Navy awarded Insitu a $1.8 million contract to train service members on the RQ-21A Blackjack drone. (FBO)

The U.S. Navy awarded Raytheon a $14.7 million contract for the AN/AQS-20, a mine hunting sonar that is designed to be towed by unmanned undersea and surface vehicles, as well as by manned platforms. (DoD)   

The U.S. Air Force awarded Radio Hill Technologies a $2.8 million contract for counter-drone systems. (FBO)

The Research & Development Corporation of Newfoundland and Labrador awarded Kraken Sonar Systems $553,609 in funding to support development of the ThunderFish autonomous underwater vehicle. (Press Release)

American Robotics, a company that develops drones for commercial farming applications, secured $1.1 million in a funding round led by Brain Robotics Capital. (Press Release)

DroneSAR, a Dublin-based startup that seeks to develop drone software for emergency response, will receive $55,880 in funding as part of the European Space Agency Business Incubation Centre. (Silicon Republic)

Ontario-based SkyX received $4 million in funding from Kuang-Chi Group to continue developing self-charging drones for long-range industrial inspection missions. (Unmanned Aerial Online)  

L3 Technologies has acquired Open Water Power, a Massachusetts-based company that develops high-density aluminium batteries for unmanned undersea vehicles. (IHS Jane’s 360)

Israeli drone manufacturer Aeronautics will make an initial public offering by mid-June. (IHS Jane’s 360)

BIKI, an autonomous commercial underwater drone equipped with a 4K camera, has reached its fundraising goal on Kickstarter. (News Ledge)

Alta Devices and PowerOasis announced a partnership to develop a solar/lithium-ion hybrid battery for drones. (Drone360)

For updates, news, and commentary, follow us on Twitter. The Weekly Drone Roundup is a newsletter from the Center for the Study of the Drone. It covers news, commentary, analysis and technology from the drone world. You can subscribe to the Roundup here.

Robots Podcast #235: Locus Robotics, with Rick Faulk


In this episode, Abate De Mey speaks with Rick Faulk, CEO of Locus Robotics, about warehouse automation with collaborative robots. Locus Robotics increases the productivity of workers in e-commerce warehouses by using robot helpers to transport items that are passed to them by the workers. The lightweight autonomous robots move at a similar pace to their human co-workers and use LiDAR and computer vision to detect people and avoid collisions, allowing people to share the warehouse floor with the robots. The collaborative robotic system is lightweight and can be adapted to existing warehouses with minimal alterations.

 

Rick Faulk
Rick Faulk, CEO at Locus Robotics, leads the executive team and is responsible for overall strategy and execution. He has over thirty years of experience in executive management, sales and marketing for some of the world’s most successful technology companies, such as Intronis, j2 Global, WebEx, Intranets.com, Lotus Development, Mzinga, and PictureTel. Rick also sits on various boards, including Influitive and Hostway, and is an advisor to a number of early-stage companies.

Links

New Horizon 2020 robotics projects, 2016: BADGER

In 2016, the European Union co-funded 17 new robotics projects from the Horizon 2020 Framework Programme for research and innovation. 16 of these resulted from the robotics work programme, and 1 project resulted from the Societal Challenges part of Horizon 2020. The robotics work programme implements the robotics strategy developed by SPARC, the Public-Private Partnership for Robotics in Europe (see the Strategic Research Agenda). 

Every week, euRobotics will publish a video interview with a project, so that you can find out more about their activities. This week features BADGER, a Robot for Autonomous Underground Trenchless Operations, Mapping and Navigation.

Objectives

The goal of the project is the design and development of an integrated underground robotic system capable of autonomously constructing small-diameter, highly curved subterranean tunnel networks, and of localization, mapping and autonomous navigation during operation. The robotic system will enable the execution of tasks in application domains of high societal and economic impact, including trenchless construction (cabling and piping installations), search and rescue operations, and remote science and exploration applications.

Expected impact

The expected strategic impact of the BADGER project focuses on:

  1. Introducing advanced robotics technologies, including intelligent control and cognition capabilities, to significantly increase European competitiveness,
  2. Drastically reducing traffic congestion and pollution in European urban environments, thereby increasing people’s quality of life,
  3. Enabling technologies for new potential applications: search and rescue, mining and quarrying, civil applications, mapping, etc.

Partners

UNIVERSITY CARLOS III OF MADRID (ES) 
UNIVERSITY OF GLASGOW (UK)
CENTRE FOR RESEARCH AND TECHNOLOGY HELLAS (GR) 
IDS GEORADAR (IT)
SINGULARLOGIC (GR)
TRACTO-TECHNIK (DE)
ROBOTNIK (ES) 

Coordinator: Prof. Carlos Balaguer
balaguer@ing.uc3m.es
Twitter: @badger_project

Project website: www.badger-robotics.eu

If you enjoyed reading this article, you may also want to read:

See all the latest robotics news on Robohub, or sign up for our weekly newsletter.

The Robot Academy: An open online robotics education resource

The Robot Academy is a new learning resource from Professor Peter Corke and the Queensland University of Technology (QUT), the team behind the award-winning Introduction to Robotics and Robotic Vision courses. There are over 200 lessons available, all for free.

Educators are encouraged to use the Academy content to support teaching and learning in class or set them as flipped learning tasks. You can easily create viewing lists with links to lessons or masterclasses. Under Resources, you can download a Robotics Toolbox and Machine Vision Toolbox, which are useful for simulating classical arm-type robotics, such as kinematics, dynamics, and trajectory generation.
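The toolboxes themselves are MATLAB-based, but the kind of arm kinematics they simulate can be sketched in a few lines of any language. As a rough, hedged illustration (this is not code from the toolboxes), here is the forward kinematics of a hypothetical two-link planar arm in Python, with made-up link lengths:

```python
import math

def fk_planar_2link(theta1, theta2, l1=1.0, l2=0.8):
    """Forward kinematics of a two-link planar arm: given the two
    joint angles (radians) and link lengths, return the (x, y)
    position of the end effector."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# With both joint angles at zero the arm lies fully extended
# along the x-axis, so the end effector sits at l1 + l2.
print(fk_planar_2link(0.0, 0.0))  # -> (1.8, 0.0)
```

The toolboxes generalize this idea to full 3D arms described by Denavit-Hartenberg parameters, and add dynamics and trajectory generation on top.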

The lessons were created in 2015 for the Introduction to Robotics and Robotic Vision courses. We describe our approach to creating the original courses in the article, An Innovative Educational Change: Massive Open Online Courses in Robotics and Robotic Vision. The courses were designed for university undergraduate students, but many lessons are suitable for anybody; see the difficulty rating on each lesson.

Under Masterclasses, students can choose a subject and watch a set of videos related to that particular topic. Single lessons can offer a short training segment or a refresher. Three online courses, Introducing Robotics, are also offered.

Below are examples of a single lesson and a masterclass. We encourage everyone to take a look at the QUT Robot Academy by visiting our website.

Single Lesson

Out and about with robots

In this video, we look at a diverse range of real-world robots and discuss what they do and how they do it.

Masterclass

Robot joint control: Introduction (Video 1 of 12)

In this video, students learn how we make robot joints move to the angles or positions required to achieve the desired end-effector motion. This is the job of the robot’s joint controller. The lecture also touches on the realm of control theory.

Robot joint control: Architecture (video 2 of 12)

In this lecture, we discuss how a robot joint is a mechatronic system comprising motors, sensors, electronics and embedded computing that implements a feedback control system.
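As a rough sketch of what such a feedback loop does (this is an illustration, not part of the Academy materials), here is a minimal proportional joint controller in Python, with made-up gain, timestep and step-count values:

```python
def simulate_joint(target, kp=4.0, dt=0.01, steps=500):
    """Simulate a highly simplified robot joint under proportional
    feedback control: at each timestep the motor command (a velocity)
    is proportional to the error between the target angle and the
    measured angle, and the joint integrates that velocity."""
    angle = 0.0  # measured joint angle, radians
    for _ in range(steps):
        error = target - angle   # sensor feedback
        velocity = kp * error    # motor command (P control)
        angle += velocity * dt   # joint responds
    return angle

final = simulate_joint(1.0)
print(round(final, 3))  # -> 1.0 (converged to the target angle)
```

A real joint controller must also contend with motor dynamics, friction, gravity load and sensor noise, which is why the masterclass devotes twelve videos to the topic.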

Robot joint control: Actuators (video 3 of 12)

Actuators are the components that actually move the robot’s joint. So, let’s look at a few different actuation technologies that are used in robots.

To watch the rest of the video series, visit their website.


The Wolf: The Hunt Continues shows calculated precision for hacking into a network

The Wolf: The Hunt Continues Starring Christian Slater Presented by HP Studios | HP Source: HP Studios/YouTube

Here’s a video you will want to watch. “The Wolf: The Hunt Continues” is really an ad showing how a hacker can enter a network through an unprotected printer (or robot). Christian Slater stars as the evil hacker.

“There are hundreds of millions of business printers in the world and less than 2% are secure,” said Vikrant Batra, Global Head of Marketing for Printing & Imaging, HP. “Everyone knows that a PC can be hacked, but not a printer.” [Hence the need to inform about how easily a printer can be hacked and the consequences of that.]

Although not related to the recent WannaCry attack, which held hundreds of thousands of companies ransom, this terrifying 7-minute HP advertisement about securing seemingly inconsequential devices dramatizes what can happen if we don’t stay a step ahead of the threats that are out there waiting to happen. As companies attempt to stream and analyze data from their Internet of Things (IoT) sensors and software, and from varied pieces of equipment throughout their facilities, opportunities such as the one described in the HP video will certainly arise.

One that comes to mind is FANUC’s plan to network all its CNCs, robots and peripheral devices and sensors used in automation systems with the goal of optimizing up-time, maintenance schedules and manufacturing profitability. FANUC is collaborating with Cisco, Rockwell and Preferred Networks to craft a secure system which they’ve named FIELD. Let’s hope it works.

Fortune Magazine recently reported about consumer products that spy on their users by companies attempting to learn new business models based on data:

What do a doll, a popular set of headphones, and a sex toy have in common? All three items allegedly spied on consumers, creating legal trouble for their manufacturers.

In the case of We-Vibe, which sells remote-control vibrators, the company agreed to pay $3.75 million in March to settle a class-action suit alleging that it used its app to secretly collect information about how customers used its products. The audio company Bose, meanwhile, is being sued for surreptitiously compiling data—including users’ music-listening histories—from headphones.

For consumers, such incidents can be unnerving. Almost any Internet-connected device—not just phones and computers—can collect data. It’s one thing to know that Google is tracking your queries, but quite another to know that mundane personal possessions may be surveilling you too.

So what’s driving the spate of spying? The development of ever-smaller microchips and wireless radios certainly makes it easy for companies. As the margins on consumer electronics grow ever thinner, you can’t blame companies for investigating new business models based on data, not just on devices.

Aargh!

Robotics classes in temples: Tradition and innovation in Japan

Robotera is the fusion of robots (robo) and temples (tera) in Japanese. Buddhist temples in Japan are offering robotics classes to foster interest in Buddhist culture and temples among future generations.

As well as offering traditional activities—such as tea ceremonies, lectures, reading and crafting—temples in Japan have started working with the robot academy franchise Robo Done to broaden their educational offerings and teach robotics to kids ages 6 and up.

These activities have been a great hit, drawing numerous kids and their parents to the temples. As a result, the number of temples offering robotics classes is growing rapidly. From Daigoji temple in Kyoto to Shinagawaji temple in Tokyo, these innovative programming courses are helping a new generation approach traditional culture while promoting STEM education and robotics. The old customs are innovating to move closer to the 21st century without losing their roots. This is a clear example of the two faces of Japan: tradition and future.

Click here to read more at Robotera, or read more about Robo Done School here.

Xponential 2017 in images

HARRIS

The Xponential 2017 national conference was held May 8-11 by the Association for Unmanned Vehicle Systems International (AUVSI) in the Kay Bailey Hutchison Convention Center in Dallas, Texas. The event took place in the largest exhibit hall ever dedicated to unmanned systems and robotics, with over 370,000 square feet. It featured over 650 robotics organizations – companies, research institutions, universities, consultants, nonprofits and more – from the U.S. and countries worldwide.

Here’s a sample of images from the world’s largest tradeshow for unmanned systems. All images by Lucien Miller, CEO of innov8tivedesigns.com.

AERYON Labs
AIRBUS DEFENCE
FIRSTEC CO
Infrared Cameras Inc
PRODRONE
QinetiQ North America
Quasonix
Robotis
STAMPEDE
TEI
TELEDYNE TECHNOLOGIES
Tianjin Aurora UAV Technology

Click here to view the full gallery.

SoftBank invests $5 billion into Didi Chuxing and $4 billion more in Nvidia

SoftBank, the giant telecom company, is venturing out into the world of robotics and transportation services. DealStreet Asia said that SoftBank is trying to transform itself into the ‘Berkshire Hathaway of the tech industry’ with the recent launch of a $100 billion technology fund.

UPDATED 5/24/17: SoftBank has acquired 4.9% of the outstanding shares of Nvidia Corp.

First, SoftBank bought Aldebaran, the maker of the Nao and Romeo robots, and redirected it to produce the Pepper robot, which has been sold in the thousands to businesses as a guide, information source and order taker. It then formed bigger partnerships with Foxconn and Alibaba to manufacture and market Pepper and other consumer products, and most recently established the $100 billion technology fund.

Recognizing that the telecom services market has matured, SoftBank is putting its money where it can participate in the new worlds of robotics and transportation as a service. Its $5 billion investment in Didi Chuxing, China’s largest ride-sharing company, is a perfect example.

Didi Chuxing

Didi, which already serves more than 400 million users across China, provides services including taxi hailing, private car-hailing, Hitch (social ride-sharing), DiDi Chauffeur, DiDi Bus, DiDi Test Drive, DiDi Car Rental and DiDi Enterprise Solutions to users in China via a smartphone application.

Tencent, Baidu and Alibaba are big investors — even Apple invested $1 billion.

The transformation of the auto industry into one focused on providing transportation services is a moving target with much news, talent movement, investment and widely-varying forecasts. But all signs show that it is booming and growing.

For more information on this subject, read the views of Chris Urmson, previous CTO of Google’s self-driving car group, in my article entitled: Transportation as a Service: a look ahead.

SoftBank Group Corp. acquired a $4 billion stake in Nvidia Corp. making it the fourth-largest shareholder of the graphics chipmaker.

Nvidia

Nvidia, a gaming chipmaker, has been receiving a lot of media attention for their GPU deep learning AI which they call ‘the next era of computing’ — with the GPU acting as the brain of computers, robots and self-driving cars that can perceive and understand the world around their sensors.

Nvidia recently introduced the NVIDIA Isaac™ robot simulator, which utilizes sophisticated video-game and graphics technologies to train intelligent machines in simulated real-world conditions before they get deployed. The company also introduced a set of robot reference-design platforms that make it faster to build such machines using the NVIDIA Jetson™ platform.

“Robots based on artificial intelligence hold enormous promise for improving our lives, but building and training them has posed significant challenges. NVIDIA is now revolutionizing the robotics industry by applying our deep expertise in simulating the real world so that robots can be trained more precisely, more safely and more rapidly.”

The sunny side of the roboconomy in the Middle East

Jordan’s Fifth National Technology Parade. At the parade, university students showcased their grasp of modern technologies with projects spanning renewable energy and water to robotics. Photo Credit: UN Women/Hamza Mazra’awi CC

The Middle East and North Africa’s youthful, fast-urbanizing population is perfectly placed to embrace technology and reap the rewards of the Fourth Industrial Revolution.

Much has been written already about the arrival of the Fourth Industrial Revolution (4IR) and the opportunity that the convergence of its new technologies offers in terms of building value into production systems and economies around the world. In one sense, the playing field could be levelled out. Localized production is being made more feasible for many small producers, setting developing communities on a path towards self-sufficiency, while falling costs could enable factories of all sizes to boost their productivity levels.

However, on the opposite side of the equation, news headlines have been dominated by predictions that human workers will be substituted by robots, leading to widespread job losses and heightened societal challenges. Additionally, doubt has been shed on the ability of regions that are less industrialized, or those with fractured economies and infrastructure, to be able to respond to these disruptions and compete effectively in the future.

For the Middle East and North Africa, it’s a critical question. Clearly, the region contains a mixture of countries in very different situations, ranging from those with active conflicts, challenged societal cohesion and decreasing incomes from natural resource reserves, to thriving, inclusive, relatively advanced economies.

However, the collision of some of the region’s characteristic megatrends with the 4IR phenomenon actually positions it to take a leading role in adopting and leveraging new technologies. Here are some examples:

A rapidly growing, young, tech-savvy population and the role of augmented reality/virtual reality (AR/VR) and wearables. More than 40% of people across the Middle East and North Africa are under the age of 25, and population growth is second only to sub-Saharan Africa. This growth is set to continue, with the total population forecast to reach 700 million by 2050. While clearly this indicates an urgent need to create jobs and build new capabilities, a new generation of millennial workers who have grown up with technology at their fingertips are arguably more likely to adapt to the needs of the new production age. For example, companies could use augmented reality to conduct “hyper-training” for employees, resulting in increased engagement, dramatically faster training times and a more capable workforce.

Urbanization, new infrastructure development and the internet of things (IoT). Around 263 million people – or 62% of the region’s population – are city-dwellers, and this urban populace is expected to double by 2040. This means more construction, and more opportunities to embed IoT devices into current and future builds, traffic management, energy management and other smart systems, to help the region take a leap forward. Cities around the world have begun to harness the power of digital connectivity. Barcelona uses smart lamp posts that sense pedestrians to adjust lighting, sample air quality and share information with city agencies and the public. Singapore uses smart bus fleets that identify issues and significantly reduce crowding and wait times. Dubai has installed new traffic signals that spot the movement of pedestrians and automatically modify the signal timing to encourage more people to walk and help reduce accidents.

Economic diversification, productivity and the role of robots. Volatile oil prices have placed strain on the Middle East’s oil-exporting countries, resulting in a redoubled focus on economic diversification. All of the Gulf Cooperation Council (GCC) countries have designed long-term plans to increase their diversification, attract investment, grow the SME sector and private-sector jobs, and increase GDP and exports. Converging new technologies offer an alternative to traditional routes to development, and make it possible for countries to enter new industry sectors with relative ease.

Examples include “speed factories” that use robots and 3D printing to rapidly produce customized goods, which can help accelerate localization of production. This automation can multiply productivity and enable countries in the region to enter new markets where they could not compete before, such as aerospace-parts manufacturing. Furthermore, the use of automation and autonomy for cargo and passenger movement could increase the region’s competitive advantage.

Of course, there are undoubtedly challenges for the region to overcome as it builds out a new future, all of which will be key to alleviating the root causes of regional security conflicts. Three that are central to unleashing the transformational growth of the Fourth Industrial Revolution are:

Building the right capabilities. Some regions are behind on the path towards industrialization, and it is acknowledged that students and professionals must be better equipped with the relevant skills – not only in STEM subjects, but in the intrinsically human skills that will be in demand more than ever before as automation alters the role for humans in the workplace.

Supportive governance, regulation and policies. This comes down to governments adopting more progressive and inclusive frameworks that encourage demand, stimulate investment, boost enterprise development, reduce corruption and redress the imbalances of historical exclusion in some of its societies.

Pan-regional integration. Finally, in a world where regions are beginning to look inwards, the Middle East and North Africa stand to gain from improving their regional economic exchanges. Economic integration in the region is among the lowest globally. Increased intra-regional mobility of goods, capital and workforce would boost economic growth and help the region better cope with the disruptions of 4IR.

I am in no doubt that the Middle East and North Africa stand at the threshold of an enormous opportunity as the digital revolution beckons. I believe that it has the capacity to seize that opportunity and to continue its path towards a highly productive and more integrated future.

 Read the original article on the WEF website here.
