

Spider webs as computers

Spiders are truly amazing creatures. They have evolved over more than 200 million years, can be found in almost every corner of our planet, and are among the most successful animals. No less impressive are their webs: highly intricate structures that have been optimised by evolution over approximately 100 million years for the ultimate purpose of catching prey.

However, interestingly, the closer you look at spiders’ webs, the more detail you observe, and the structures are far more complicated than you would expect of a simple snare. They are made of a variety of different types of silk, use water droplets to maintain tension [4], and the structure is highly dynamic [4]. Spiders’ webs have a great deal more morphological complexity than would be needed simply to catch flies.

Since nature typically does not squander resources, the question arises: why are spiders’ webs so complex? Might they have other functionalities besides being a simple trap? One of the most interesting answers to this question is that spiders might use their webs as computational devices.

How does the spider use the web as a computer?

Although most spiders have many eyes (the majority have eight, and some have up to twelve), many spiders have poor eyesight. To understand what is going on in their webs, they use mechanoreceptors in their legs (lyriform organs) to “listen” to vibrations in the web. Different species of spiders have different preferred places to sit and observe: some can be found right at the center, while others prefer to sit outside the actual web and listen to one single thread. It is quite remarkable that, based only on the information that comes through this single thread, the spider seems able to deduce what is going on in its web and where these events are taking place.

For example, the spider needs to know whether prey, such as a fly, is entangled in the web, or whether the vibrations are coming from a dangerous insect, such as a wasp, that it should stay away from. The web is also used to communicate with potential mates, and the spider even excites the web and listens to the echo. This might be a way for the spider to check whether threads are broken or whether the tension in the web needs to be increased.

From a computational point of view, the spider needs to classify different vibration patterns (e.g., prey vs. predator vs. mate) and to localise their origin (i.e., where in the web the vibration started).

One way to understand how a spider’s web could carry out this computational functionality is through the concept of morphological computation. The term describes the observation that mechanical structures throughout nature carry out useful computations: for example, they help to stabilise running, facilitate the processing of sensory data, and help animals and plants to interact with complex and unpredictable environments.

One could say computation is outsourced to the physical body (e.g., from the brain to another part of the body).

From this point of view, the spider’s web can be seen as a nonlinear, dynamic filter: a kind of pre-processing unit that makes the vibration signals easier for the animal to interpret. The web’s dynamic properties and its complex morphological structure mix vibration signals in a nonlinear fashion. It even has some memory, as can easily be seen by pinching a web: it responds with vibrations for some moments after the impact, echoing the original input. The web can also damp unwanted frequencies, which is crucial for getting rid of noise, and it may even highlight signals at frequencies that carry more relevant information about the events taking place on the web.
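
This pre-processing role is closely related to what researchers call physical reservoir computing: a nonlinear dynamical system with fading memory does most of the computational work, and only a simple readout needs to be trained. The Python sketch below is purely illustrative; the random network standing in for the web, the two test signals, and all sizes are invented assumptions, not results from the project.

```python
# Illustrative "web as reservoir" sketch: a random recurrent network
# (a stand-in for the web's coupled threads) nonlinearly mixes an input
# vibration and retains a fading memory of it; only a linear readout
# (a stand-in for the spider) is trained.
import numpy as np

rng = np.random.default_rng(0)
n_nodes = 100                                 # abstract "threads" of the web
W = rng.normal(0, 1, (n_nodes, n_nodes))      # random coupling between threads
W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # scale down so memory fades
w_in = rng.normal(0, 1, n_nodes)              # how the input excites each thread

def reservoir_states(u):
    """Drive the network with signal u; return the state at each time step."""
    x = np.zeros(n_nodes)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + w_in * u_t)       # nonlinear mixing plus memory
        states.append(x.copy())
    return np.array(states)

# Toy task: tell a slow "prey" buzz from a faster "tapping" signal using
# only a linear readout; the nonlinear work is done by the "web" itself.
t = np.linspace(0, 1, 200)
prey = np.sin(2 * np.pi * 5 * t)
tap = np.sin(2 * np.pi * 20 * t)
X = np.vstack([reservoir_states(prey)[-1], reservoir_states(tap)[-1]])
y = np.array([0.0, 1.0])                      # desired labels
readout, *_ = np.linalg.lstsq(X, y, rcond=None)
print(X @ readout)                            # approximately [0, 1]
```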

These are all useful computations, and they make it easier for the spider to “read” and understand the vibration patterns. As a result, the animal’s brain has less work to do and can concentrate on other tasks; in effect, the spider seems to devolve computation to the web. This might also be why spiders tend their webs so intensively: they constantly observe the web, adapt its tension when it changes (e.g., due to a change in humidity), and repair it as soon as a thread breaks.

From spider webs to sensors

People have speculated for a while that spider webs might have additional functionalities; a great article that discusses this is “The Thoughts of a Spiderweb”.

However, nobody has yet systematically investigated the actual computational capabilities of the web. This is about to change. We recently started a Leverhulme Trust research project that will investigate naturally spun webs of different spider species to understand which computations might take place in these structures, and how. Moreover, the project will not only try to understand the underlying computational principles, but will also develop morphological computation-based sensor technology to measure flow and vibrations.

The project combines our research expertise in Morphological Computation at the University of Bristol and the expertise on spider webs at the Silk Group in Oxford.

In experimental setups we will use solenoids and laser Doppler vibrometers to measure vibrations in the web with very high precision. The goal is to understand how computation is carried out, and we will systematically investigate how filtering, memory, and signal integration can arise in such structures. In parallel, we will develop a general simulation environment for vibrating structures, which we will use to ask how shapes and materials other than spider webs and silk can help to carry out computations. In addition, we will develop real prototypes of vibration and flow sensors inspired by these findings. They will very likely look different from spider webs and will use various other types of materials.
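
To give a rough sense of what a simulation environment for vibrating structures involves, here is a toy one-dimensional chain of masses joined by damped springs, driven at one end and read out at the other. Every parameter is an arbitrary placeholder; a real web model would use measured silk properties and two-dimensional geometry.

```python
# Toy "vibrating structure" simulator: a 1-D chain of masses connected by
# damped springs with fixed walls at both ends. A vibration is injected at
# node 0 and the filtered response is read at the far end.
import numpy as np

n, k, c, m, dt = 20, 50.0, 0.05, 1.0, 1e-3    # nodes, stiffness, damping, mass, time step
x = np.zeros(n)                                # displacements
v = np.zeros(n)                                # velocities

def step(drive):
    """Advance the chain one time step with a drive force at node 0."""
    global x, v
    left = np.concatenate(([0.0], x[:-1]))     # fixed wall on the left
    right = np.concatenate((x[1:], [0.0]))     # fixed wall on the right
    f = k * (left - 2 * x + right) - c * v     # spring and damping forces
    f[0] += drive
    v += dt * f / m                            # semi-implicit Euler integration
    x += dt * v

trace = []
for i in range(5000):
    step(np.sin(2 * np.pi * 8 * i * dt))       # 8 Hz vibration injected at one end
    trace.append(x[-1])                        # response "felt" at the far end
print(min(trace), max(trace))                  # the chain filters the input
```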

Such sensors could be used in a variety of applications. For example, morphological computation-based flow sensors could detect anomalies in the flow through tubes, vibration sensors placed at strategic points on buildings could detect earthquakes or structural failure, and highly dynamic machines, such as wind turbines, could be monitored to predict failure.

Ultimately, the project will provide not only a new technology for building sensors, but, we hope, also a fundamental understanding of how spiders use their webs for computation.

References

[1] Hauser, H.; Ijspeert, A.; Füchslin, R.; Pfeifer, R. & Maass, W. “Towards a theoretical foundation for morphological computation with compliant bodies.” Biological Cybernetics, Springer Berlin/Heidelberg, 2011, 105, 355-370.

[2] Hauser, H.; Ijspeert, A.; Füchslin, R.; Pfeifer, R. & Maass, W. “The role of feedback in morphological computation with compliant bodies.” Biological Cybernetics, Springer Berlin/Heidelberg, 2012, 106, 595-613.

[3] Hauser, H.; Füchslin, R. M. & Nakajima, K. “Morphological Computation – The Physical Body as a Computational Resource.” In: Opinions and Outlooks on Morphological Computation, eds. Hauser, H.; Füchslin, R. M. & Pfeifer, R., Chapter 20, pp. 226-244, 2014, ISBN 978-3-033-04515-6.

[4] Mortimer, B.; Gordon, S. D.; Holland, C.; Siviour, C. R.; Vollrath, F. & Windmill, J. F. C. “The Speed of Sound in Silk: Linking Material Performance to Biological Function.” Advanced Materials, 2014, 26, 5179-5183. doi:10.1002/adma.201401027.

Drones that drive

Image: Alex Waller, MIT CSAIL

Being able to both walk and take flight is typical in nature – many birds, insects and other animals can do both. If we could program robots with similar versatility, it would open up many possibilities: picture machines that could fly into construction areas or disaster zones that aren’t near roads, and then be able to squeeze through tight spaces to transport objects or rescue people.

The problem is that robots that are good at one mode of transportation are usually, by necessity, bad at another. Drones are fast and agile, but generally have too limited a battery life to travel long distances. Ground vehicles, meanwhile, are more energy-efficient, but also slower and less mobile.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are aiming to develop robots that can do both. In a new paper, the team presented a system of eight quadcopter drones that can both fly and drive through a city-like setting with parking spots, no-fly zones and landing pads.

“The ability to both fly and drive is useful in environments with a lot of barriers, since you can fly over ground obstacles and drive under overhead obstacles,” says PhD student Brandon Araki, lead author on a paper about the system out of CSAIL director Daniela Rus’ group. “Normal drones can’t maneuver on the ground at all. A drone with wheels is much more mobile while having only a slight reduction in flying time.”

Araki and Rus developed the system along with MIT undergraduate students John Strang, Sarah Pohorecky and Celine Qiu, as well as Tobias Naegeli of ETH Zurich’s Advanced Interactive Technologies Lab. The team presented their system at IEEE’s International Conference on Robotics and Automation (ICRA) in Singapore earlier this month.

How it works

The project builds on Araki’s previous work developing a “flying monkey” robot that crawls, grasps, and flies. While the monkey robot could hop over obstacles and crawl about, there was still no way for it to travel autonomously.

To address this, the team developed various “path-planning” algorithms that ensure the drones don’t collide with one another (see the sketch below). To make the drones capable of driving, the team put two small motors with wheels on the bottom of each one. In simulations, the robots could fly for 90 meters or drive for 252 meters before their batteries ran out.
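
For a flavour of what such path planning involves, here is a deliberately simplified sketch of prioritized multi-robot planning on a grid. The grid, the robot set, and the breadth-first planner are invented for illustration; the actual system in the paper uses its own, more capable algorithms.

```python
# Minimal prioritized multi-robot planning sketch: robots are planned one
# at a time with BFS in (cell, time) space, and cells already reserved at
# a given time step by earlier robots are treated as obstacles. Swap
# conflicts between adjacent robots are ignored in this toy version.
from collections import deque

def plan(start, goal, size, reserved):
    """BFS over (cell, time) states; reserved (cell, time) pairs are blocked."""
    frontier = deque([(start, 0, [start])])
    seen = {(start, 0)}
    while frontier:
        cell, t, path = frontier.popleft()
        if cell == goal:
            return path
        r, c = cell
        for dr, dc in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]:  # wait or move
            nxt = (r + dr, c + dc)
            state = (nxt, t + 1)
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and state not in seen and state not in reserved):
                seen.add(state)
                frontier.append((nxt, t + 1, path + [nxt]))
    return None

# Plan two robots in priority order; each reserves its (cell, time) pairs.
reserved = set()
for start, goal in [((0, 0), (4, 4)), ((4, 0), (0, 4))]:
    path = plan(start, goal, 5, reserved)
    reserved |= {(cell, t) for t, cell in enumerate(path)}
    print(path)
```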

Adding the driving components slightly reduced the drone’s battery life, cutting the maximum distance it could fly by 14 percent, to about 300 feet. But since driving is still much more efficient than flying, the gain in efficiency from driving more than offsets the relatively small loss in flying efficiency due to the extra weight.

“This work provides an algorithmic solution for large-scale, mixed-mode transportation and shows its applicability to real-world problems,” says Jingjin Yu, a computer science professor at Rutgers University who was not involved in the paper.

The team also tested the system using everyday materials like pieces of fabric for roads and cardboard boxes for buildings. They tested eight robots navigating from a starting point to an ending point on a collision-free path, and all were successful.

Rus says that systems like theirs suggest that another approach to creating safe and effective flying cars is not to simply “put wings on cars,” but to build on years of research in drone development to add driving capabilities to them.

“As we begin to develop planning and control algorithms for flying cars, we are encouraged by the possibility of creating robots with these capabilities at small scale,” says Rus. “While there are obviously still big challenges to scaling up to vehicles that could actually transport humans, we are inspired by the potential of a future in which flying cars could offer us fast, traffic-free transportation.”

Click here to read the paper.

The Drone Center’s Weekly Roundup: 6/24/17

Amazon’s “beehive” concept for future multi-storey fulfillment centers. Credit: Amazon

June 19, 2017 – June 25, 2017

At the Center for the Study of the Drone

In an interview with Robotics Tomorrow, Center for the Study of the Drone Co-Director Arthur Holland Michel discusses the growing use of drones by law enforcement and describes future trends in unmanned systems technology.

News

The U.S. State Department is set to approve the sale of 22 MQ-9B Guardian drones to India, according to Defense News. The sale is expected to be announced during Prime Minister Narendra Modi’s visit to the United States. The Guardian is an unarmed variant of the General Atomics Aeronautical Systems Predator B. If the deal is approved and finalized, India would be the fifth country besides the U.S., and the first non-NATO member, to operate the MQ-9.

The United States shot down another armed Iranian drone in Syria. A U.S. F-15 fighter jet intercepted the Shahed-129 drone near the town of Tanf, where the U.S.-led coalition is training Syrian rebel forces. The shootdown comes just days after the U.S. downed another Shahed-129 on June 8, as well as a Syrian SU-22 manned fighter jet on June 18. (Los Angeles Times)

Meanwhile, a spokesperson for Pakistan’s Ministry of Foreign Affairs confirmed that the Pakistani air force shot down an Iranian drone. According to Nafees Zakaria, the unarmed surveillance drone was downed 2.5 miles inside Pakistani territory in the southwest Baluchistan province. (Associated Press)

A U.S. Air Force RQ-4 Global Hawk drone crashed in the Sierra Nevada mountains in California. The RQ-4 is a high-altitude long-endurance surveillance drone. (KTLA5)

The U.S. House of Representatives and Senate introduced bills to reauthorize funding for the Federal Aviation Administration. Both bills include language on drones. The Senate bill would require all drone operators to pass an aeronautical knowledge test and would authorize the FAA to require that drone operators be registered. (Law360)

President Trump spoke with the CEOs of drone companies at the White House as part of a week focused on emerging technologies. Participants discussed a number of topics, including state and local drone laws and drone identification and tracking technologies. (TechCrunch)

The Pentagon will begin offering an award for remote weapons strikes to Air Force personnel in a variety of career fields, including cyber and space. The “R” device award was created in 2016 to recognize drone operators. (Military.com)

The U.S. Federal Aviation Administration has formed a committee to study electronic drone identification methods and technologies. The new committee is comprised of representatives from industry, government, and law enforcement. (Press Release)

Commentary, Analysis, and Art

At MarketWatch, Sally French writes that in the meeting at the White House, some CEOs of drone companies argued for more, not fewer, drone regulations. (MarketWatch)

At Air & Space Magazine, James R. Chiles writes that the crowded airspace above Syria could lead to the first drone-on-drone air war.

At Popular Science, Kelsey D. Atherton looks at how fighter jets of the future will be accompanied by swarms of low-cost armed drones.  

At Drone360, Leah Froats breaks down the different drone bills that have recently been introduced in Congress.

At Motherboard, Ben Sullivan writes that drone pilots are “buying Russian software to hack their way past DJI’s no fly zones.”

At Bloomberg Technology, Thomas Black writes that the future of drone delivery hinges on precise weather predictions.

At Aviation Week, James Drew writes that U.S. lawmakers are encouraging the Air Force to conduct a review of the different MQ-9 Reaper models that it plans to purchase.  

Also at Aviation Week, Tony Osborne writes that studies show that European governments are advancing the implementation of drone regulations.

At The Atlantic, Marina Koren looks at how artificial intelligence helps the Curiosity rover navigate the surface of Mars without any human input.

At Phys.org, Renee Cho considers how drones are helping advance scientific research.

At Ozy, Zara Stone writes that drones are helping to accelerate the time it takes to complete industrial painting jobs.

At the European Council on Foreign Relations, Ulrike Franke argues that instead of following the U.S. example, Europe should develop its own approach to acquiring military drones.

At the New York Times, Frank Bures looks at how a U.S. drone pilot is helping give the New Zealand team an edge in the America’s Cup.

At Cinema5D, Jakub Han examines how U.S. drone pilot Robert Mcintosh created an intricate single-shot fly-through video in Los Angeles.

Know Your Drone

Amazon has filed a patent for multi-storey urban fulfillment centers for its proposed drone delivery program. (CNN)

Airbus Helicopters has begun autonomous flight trials of its VSR700 optionally piloted helicopter demonstrator. (Unmanned Systems Technology)

Italian defense firm Leonardo unveiled the M-40, a target drone that can mimic the signatures of a number of aircraft types. (FlightGlobal)

Defense firm Textron Systems unveiled the Nightwarden, a new variant of its Shadow tactical surveillance and reconnaissance drone. (New Atlas)

Israeli defense firm Elbit Systems unveiled the SkEye, a wide-area persistent surveillance sensor that can be used aboard drones. (IHS Jane’s 360)

Researchers at the University of California, Santa Barbara have developed a WiFi-based system that allows drones to see through solid walls. (TechCrunch)

Israeli drone maker Aeronautics unveiled the Pegasus 120, a multirotor drone designed for a variety of roles. (IHS Jane’s 360)  

U.S. firm Raytheon has developed a new variant of its Coyote, a tube-launched aerial data collection drone. (AIN Online)

Drone maker Boeing Insitu announced that it has integrated a 50-megapixel photogrammetric camera into a variant of its ScanEagle fixed-wing drone. (Unmanned Systems Technology)

Telecommunications giant AT&T is seeking to develop a system to mount drones on ground vehicles. (Atlanta Business Chronicle)

U.S. defense contractor Northrop Grumman demonstrated an unmanned surface vehicle in a mine-hunting exercise in Belgium. (AUVSI)

Israeli firm Rafael Advanced Defense Systems unveiled a new radar and laser-based counter-drone system called Drone Dome. (UPI)

French firm Reflet du Monde unveiled the RDM One, a small drone that can be flown at ranges of up to 300 kilometers thanks to a satellite link. (Defense News)

RE2 Robotics is helping the U.S. Air Force build robots that can take the controls of traditionally manned aircraft. (TechCrunch)

The U.S. Marine Corps is set to begin using its Nibbler 3D-printed drone in active combat zones in the coming weeks. (3D Printing Industry)

U.S. drone maker General Atomics Aeronautical Systems has completed a design review for its Advanced Cockpit Block 50 Ground Control Station for U.S. Air Force drones. (UPI)

Researchers at NASA’s Langley Research Center are developing systems for small drones that allow them to determine on their own whether they are suffering from mechanical issues and to find a place to land safely. (Wired)

The inventor of the Roomba robotic vacuum cleaner has unveiled an unmanned ground vehicle that autonomously finds and removes weeds from your garden. (Business Insider)

Drones at Work

A group of public safety agencies in Larimer County, Colorado have unveiled a regional drone program. (The Coloradoan)

Five marijuana growing operations in California will begin using unmanned ground vehicles for security patrols. (NBC Los Angeles)

The Fargo Fire Department in North Dakota has acquired a drone for a range of operations. (KFGO)

The Rochester Police Department in Minnesota has acquired a drone for monitoring patients suffering from Alzheimer’s and other disorders. (Associated Press)

Drone maker Parrot and software firm Pix4D have selected six researchers using drones to study the impacts of climate change as the winners of an innovation grant. (Unmanned Aerial Online)

The Coconino County Sheriff’s Office and the Flagstaff Police Department used an unmanned ground vehicle to enter the home of a man who had barricaded himself in a standoff. (AZ Central)

Industry Intel

The U.S. Special Operations Command awarded Boeing Insitu and Textron Systems contracts to compete for the Mid-Endurance Unmanned Aircraft Systems III drone program. (AIN Online)

The U.S. Navy awarded Arête Associates an $8.5 million contract for the AN/DVS-1 COBRA, a payload on the MQ-8 Fire Scout. (DoD)

The U.S. Army awarded Raytheon a $2.93 million contract for Kinetic Drone Defense. (FBO)

The Spanish Defense Ministry selected the AUDS counter-drone system for immediate deployments. The contract is estimated to be worth $2.24 million. (GSN Magazine)

The European Maritime Safety Agency selected the UMS Skeldar for border control, search and rescue, pollution monitoring, and other missions. (FlightGlobal)

The Belgian Navy awarded SeeByte, a company that creates software for unmanned maritime systems, a contract for the SeeTrack software system for its autonomous undersea vehicles. (Marine Technology News)

A new company established by the Turkish government will build engines for the armed Anka drone. (DefenseNews)

Italian defense firm Leonardo is seeking to market its Falco UAV for commercial applications. (Shephard Media)  

Thales Alenia Space will acquire a minority stake in Airstar Aerospace, which it hopes will help it achieve its goal of developing an autonomous, high-altitude airship. (Intelligent Aerospace)

The Idaho STEM Action Center awarded 22 schools and libraries in Idaho $147,000 to purchase drones. (East Idaho News)

For updates, news, and commentary, follow us on Twitter. The Weekly Drone Roundup is a newsletter from the Center for the Study of the Drone. It covers news, commentary, analysis and technology from the drone world. You can subscribe to the Roundup here.

Survey: Examining perceptions of autonomous vehicles using hypothetical scenarios

Driverless car merging into traffic. How big of a gap between vehicles is acceptable? Image credit: Jordan Collver

I’m examining perceptions of autonomous cars using hypothetical scenarios. Each scenario is accompanied by an image that helps illustrate the scene, rendered in grey tones with nondescript, human-like figures, along with the option to listen to the question read aloud, so that participants can fully visualise the situation.

If you live in the UK, you can take this survey and help contribute to my research!

Public perception has the potential to affect the timescale and adoption of autonomous vehicles (AVs). As the technology advances, understanding attitudes and wider public acceptability is critical. It’s no longer a question of if, but when, we will make the transition. Long-range autonomous vehicles are expected between 2020 and 2025, with some estimates suggesting fully autonomous vehicles will take over by 2030. Most modern cars are already sold with automated features: automatic braking, autonomous parking, advanced lane assist, advanced cruise control, and queue assist, for example. Adopting fully autonomous vehicles could deliver significant societal benefits: improved road safety, reduced pollution and congestion, and another mode of transport for the mobility impaired.

The project’s aim is to add to the conversation about public perception of AVs. Survey experiments can be extremely useful tools for studying public attitudes, especially when researchers are interested in the “effects of describing or presenting a scenario in a particular way.” This unusual and creative method may provide a model for other research surveys in the future where emerging technologies are difficult to visualise. An online survey was chosen to reduce small-sample bias and to maximise responses from participants in the UK.

You can take this survey by clicking above, or alternatively, click the following link:

https://uwe.onlinesurveys.ac.uk/visualise-this

CARNAC program researching autonomous co-piloting

Credit: Aurora Flight Sciences.

DARPA, the Defense Advanced Research Projects Agency, is researching autonomous co-piloting so that aircraft can fly without a human pilot on board. The robotic system, called the Common Aircraft Retrofit for Novel Autonomous Control (CARNAC) (not to be confused with the old Johnny Carson “Carnac” routine), has the potential to reduce costs, enable new missions, and improve performance.

CARNAC, the Johnny Carson version.

Unmanned aircraft are generally built from scratch with robotic systems integrated from the earliest design stages. Existing aircraft require extensive modification to add robotic systems.

RE2, a CMU spin-off located in Pittsburgh, makes mobile manipulators for defense and space applications. It has just received an SBIR award, backed by a US Air Force development contract, to develop a retrofit kit that would provide a robotic piloting solution for legacy aircraft.

“Our team is excited to incorporate the Company’s robotic manipulation expertise with proven technologies in applique systems, vision processing algorithms, and decision making to create a customized application that will allow a wide variety of existing aircraft to be outfitted with a robotic pilot,” stated Jorgen Pedersen, president and CEO of RE2 Robotics. “By creating a drop-in robotic pilot, we have the ability to insert autonomy into and expand the capabilities of not only traditionally manned air vehicles, but ground and underwater vehicles as well. This application will open up a whole new market for our mobile robotic manipulator systems.”

Aurora Flight Sciences, a Manassas, VA developer of advanced unmanned systems and aerospace vehicles, is working on a similar DARPA project, the Aircrew Labor In-Cockpit Automation System (ALIAS). ALIAS is designed as a drop-in avionics and mechanics package that can be quickly and cheaply fitted to a wide variety of fixed-wing and rotary-wing aircraft, from a Cessna to a B-52. Once installed, it analyzes the aircraft and adapts itself to the job of the second pilot.

Credit: Aurora Flight Sciences

Assistive robots compete in Bristol

The Bristol Robotics Laboratory (BRL) will host the first European Commission-funded European Robotics League (ERL) tournament for service robots to be held in the UK.

Two teams, from the BRL and the University of Birmingham, will pitch their robots against each other in a series of events between 26 and 30 June.

Robots designed to support people with care-related tasks in the home will be put to the test in a simulated home test bed.

The assisted living robots of the two teams will face various challenges, including understanding natural speech and finding and retrieving objects for the user.

The robots will also have to greet visitors at the door appropriately, such as welcoming a doctor on their visit, or turning away unwanted visitors.

Associate Professor Praminda Caleb-Solly, Theme Leader for Assistive Robotics at the BRL said, “The lessons learned during the competition will contribute to how robots in the future help people, such as those with ageing-related impairments and those with other disabilities, live independently in their own homes for as long as possible.

“This is particularly significant with the growing shortage of carers available to provide support for an ageing population.”

The BRL, the host of the UK’s first ERL Service Robots tournament, is a joint initiative of the University of the West of England and the University of Bristol. The many research areas include swarm robotics, unmanned aerial vehicles, driverless cars, medical robotics and robotic sensing for touch and vision. BRL’s assisted living research group is developing interactive assistive robots as part of an ambient smart home ecosystem to support independent living.

The ERL Service Robots tournament will be held in the BRL’s Anchor Robotics Personalised Assisted Living Studio, which was set up to develop, test and evaluate assistive robotic and other technologies in a realistic home environment.

The studio was recently certified as a test bed by the ERL, which runs alongside similar competitions for industrial robots and for emergency robots, which includes vehicles that can search for and rescue people in disaster-response scenarios.

The two teams in the Bristol event will be Birmingham Autonomous Robotics Club (BARC) led by Sean Bastable from the School of Computer Science at the University of Birmingham, and the Healthcare Engineering and Assistive Robotics Technology and Services (HEARTS) team from the BRL led by PhD Student Zeke Steer.

BARC has developed its own robotics platform, Dora, and HEARTS will use a TIAGo Steel robot from PAL Robotics with a mix of bespoke and proprietary software.

The Bristol event will be open for public viewing in the BRL on the afternoon of 29 June 2017 (bookable via Eventbrite) and will include short tours of the assisted living studio for attendees. It will be held during UK Robotics Week, 24-30 June 2017, when there will be a nationwide programme of robotics and automation events.

The BRL will also be organising focus groups on 28 and 29 June 2017 (Bookable via EventBrite and here) as part of the UK Robotics Week, to demonstrate assistive robots and their functionality, and seek the views of carers and older adults on these assistive technologies, exploring further applications and integration of such robots into care scenarios.

The European Commission-funded European Robotics League (ERL) is the successor to the RoCKIn, euRathlon and EuRoC robotics competitions, all funded by the EU and designed to foster scientific progress and innovation in cognitive systems and robotics. The ERL is funded by the European Union’s Horizon 2020 research and innovation programme. See: https://www.eu-robotics.net/robotics_league/

The ERL is part of the SPARC public-private partnership set up by the European Commission and the euRobotics association to extend Europe’s leadership in civilian robotics. SPARC’s €700 million of funding from the Commission in 2014–2020 is being combined with €1.4 billion of funding from European industry. See: http://www.eu-robotics.net/sparc

euRobotics is a European Commission-funded non-profit organisation which promotes robotics research and innovation for the benefit of Europe’s economy and society. It is based in Brussels and has more than 250 member organisations. See: www.eu-robotics.net

Robots Podcast #237: Deep Learning in Robotics, with Sergey Levine

In this episode, Audrow Nash interviews Sergey Levine, assistant professor at UC Berkeley, about deep learning in robotics. Levine explains what deep learning is and discusses the challenges of applying it in robotics. Lastly, he speaks about his collaboration with Google and some of the surprising behavior that emerged from his deep learning approach, such as how the system grasps soft objects.

In addition to the main interview, Audrow spoke with Levine about his professional path. They discussed the questions that motivate him, why his PhD experience was different from what he had expected, the value of self-directed learning, work-life balance, and what he wishes he’d known in graduate school.

A video of Levine’s work in collaboration with Google.

 

Sergey Levine

Sergey Levine is an assistant professor at UC Berkeley. His research focuses on robotics and machine learning. In his PhD thesis, he developed a novel guided policy search algorithm for learning complex neural network control policies, which was later applied to enable a range of robotic tasks, including end-to-end training of policies for perception and control. He has also developed algorithms for learning from demonstration, inverse reinforcement learning, efficient training of stochastic neural networks, computer vision, and data-driven character animation.

 

 


More efficient and safer: How drones are changing the workplace

Photo credit: Pierre-Yves Guernier

Technology-driven automation plays a critical role in the global economy, and its visibility in our lives is growing. As technology impacts more and more jobs, individuals and enterprises find themselves wondering what effect the current wave of automation will have on their future economic prospects.

Advances in robotics and AI have led to modern commercial drone technology, which is changing the fundamental way enterprises interact with the world. Drones bridge the physical and digital worlds. They enable companies to combine the power of scalable computing resources with pervasive, affordable sensors that can go anywhere. This creates an environment in which businesses can make quick, accurate decisions based on enormous datasets derived from the physical world.

Removing dangers

For individuals whose jobs involve spending lots of time travelling to the far reaches of where enterprises do business, or climbing to a precarious perch to get a good view, as in infrastructure inspection or site management, an opportunity presents itself.

Historically, it’s been a dangerous job to identify the state of affairs in the physical world and analyze and report on that information. It may have required climbing on tall buildings or unstable areas, or travelling to far-flung sites to inspect critical infrastructure, like live power lines or extensive dams.

Commercial drones, as part of the current wave of automation technology, will fundamentally change this process. The jobs involved aren’t going away, but they are going to change.

A January 2017 McKinsey study on automation, employment, and productivity reported that less than 5% of all occupations can be automated entirely using demonstrated technologies, but that two-thirds of all jobs could have 30% of their work automated. Many jobs will not only become more efficient; they will also be safer, and the skills required will be more mental than physical.

New ways to amass data

Jobs that were once considered gruelling and monotonous will soon look more like knowledge-worker jobs. Until now, people in these jobs have had to go to great lengths to collect data for analysis and decision-making. That data can now be collected without putting people in harm’s way. Without the need to don a harness or climb to dangerous heights, people in these jobs can extend their careers.

We’ve seen this firsthand in our own work conducting commercial drone operation training for many of the largest insurers in America, whose teams typically include adjusters in the latter stages of their career.

When you’re 50 years old, the physical demands of climbing on roofs to conduct inspections can make you think about an early retirement, or a career change.

Keeping hard-earned skills in the workplace

But these workers are some of the best in the business, with decades of experience. No one wants to leave hard-earned skills behind due to physical limitations.

We’ve found industry veterans like these to be some of the most enthusiastic adopters of commercial drones for rooftop inspections. After one week-long session, these adjusters could operate a commercial drone to collect rooftop data without requiring any climbing. Their deep understanding of claims adjustment can be brought to bear in the field without the conventional physical demands.

Specialists with knowledge and experience like veteran insurance adjusters are far harder to find than someone who can learn how to use a commercial drone system. Removing the need to physically collect the data means the impact of their expertise can be global, and the talent competition for these roles will be global as well.

Digital skills grow in importance

Workers can come out on top in this shift by focusing on improving relevant digital skills. Their conventional daily-use manual tools will become far less important than those tools that enable them to have an impact digitally.

The tape measure and ladder will go by the wayside as more work is conducted with iPads and cloud software. This transition will also create many more opportunities to do work that simply doesn’t get accomplished today.

Take commercial building inspection as an example.

In the past, the value of a building inspection had to be balanced against many drawbacks, like the cost of stopping business so an inspection could be conducted, the liability of sending a worker to a roof, and the sheer size of sites.

Filling the data gap

The result is a significant data gap. The state of the majority of commercial buildings is simply unknown to their owners and underwriters.

Using drones for inspections dramatically reduces the inherent challenges of data collection, which makes it feasible to inspect far more buildings and creates a demand for human workers to analyze this new dataset. Filling this demand requires specialized knowledge and a niche skillset that the existing workers in this field, like the veterans from our training groups who were on the verge of leaving the field, are best-poised to provide.

This trend is happening in myriad industries, from insurance, to telecoms, to mining and construction.

Preparation now

Enterprises in industries that will be affected by this technology need to prepare for this transformation now. Those that do not will not be around in 10 years.

Workers in jobs where careers are typically cut short due to physical risk need to invest in learning digital skills, so that they can extend the length of their career and increase their value, while reducing the inherent physical toll. Individuals who see their employers falling behind in innovation have the freedom to pursue a career with a more ambitious competitor, or to take a leadership role kickstarting initiatives internally to keep pace.

There’s no shortage of challenges to tackle or problems to solve in the world.

Commercial drones, and the greater wave of automation technology, will enable us to address more of them. This will create many opportunities for the workers who are prepared to capitalize on this technology. That preparation must begin now.

Helping or hacking? Engineers and ethicists must work together on brain-computer interface technology

A subject plays a computer game as part of a neural security experiment at the University of Washington.
Patrick Bennett, CC BY-ND

By Eran Klein, University of Washington and Katherine Pratt, University of Washington

 

In the 1995 film “Batman Forever,” the Riddler used 3-D television to secretly access viewers’ most personal thoughts in his hunt for Batman’s true identity. By 2011, the metrics company Nielsen had acquired Neurofocus and had created a “consumer neuroscience” division that uses integrated conscious and unconscious data to track customer decision-making habits. What was once a nefarious scheme in a Hollywood blockbuster seems poised to become a reality.

Recent announcements by Elon Musk and Facebook about brain-computer interface (BCI) technology are just the latest headlines in an ongoing science-fiction-becomes-reality story.

BCIs use brain signals to control objects in the outside world. They’re a potentially world-changing innovation – imagine being paralyzed but able to “reach” for something with a prosthetic arm just by thinking about it. But the revolutionary technology also raises concerns. Here at the University of Washington’s Center for Sensorimotor Neural Engineering (CSNE) we and our colleagues are researching BCI technology – and a crucial part of that includes working on issues such as neuroethics and neural security. Ethicists and engineers are working together to understand and quantify risks and develop ways to protect the public now.

Picking up on P300 signals

All BCI technology relies on being able to collect information from a brain that a device can then use or act on in some way. There are numerous places from which signals can be recorded, as well as infinite ways the data can be analyzed, so there are many possibilities for how a BCI can be used.

Some BCI researchers zero in on one particular kind of regularly occurring brain signal that alerts us to important changes in our environment. Neuroscientists call these signals “event-related potentials.” In the lab, they help us identify a reaction to a stimulus.

Examples of event-related potentials (ERPs), electrical signals produced by the brain in response to a stimulus. Tamara Bonaci, CC BY-ND

In particular, we capitalize on one of these specific signals, called the P300. It’s a positive peak of electricity that occurs toward the back of the head about 300 milliseconds after the stimulus is shown. The P300 alerts the rest of your brain to an “oddball” that stands out from the rest of what’s around you.

For example, you don’t stop and stare at each person’s face when you’re searching for your friend at the park. Instead, if we were recording your brain signals as you scanned the crowd, there would be a detectable P300 response when you saw someone who could be your friend. The P300 carries an unconscious message alerting you to something important that deserves attention. These signals are part of a still unknown brain pathway that aids in detection and focusing attention.

Reading your mind using P300s

P300s reliably occur any time you notice something rare or disjointed, like when you find the shirt you were looking for in your closet or your car in a parking lot. Researchers can use the P300 in an experimental setting to determine what is important or relevant to you. That’s led to the creation of devices like spellers that allow paralyzed individuals to type using their thoughts, one character at a time.

It also can be used to determine what you know, in what’s called a “guilty knowledge test.” In the lab, subjects are asked to choose an item to “steal” or hide, and are then repeatedly shown images of both related and unrelated items. For instance, subjects choose between a watch and a necklace, and are then shown typical items from a jewelry box; a P300 appears when the subject is presented with an image of the item he took.

Everyone’s P300 is unique. In order to know what they’re looking for, researchers need “training” data. These are previously obtained brain signal recordings that researchers are confident contain P300s; they’re then used to calibrate the system. Since the test measures an unconscious neural signal that you don’t even know you have, can you fool it? Maybe, if you know that you’re being probed and what the stimuli are.
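
To make the calibration step concrete, here is a toy template-matching sketch: synthetic “training” epochs are averaged into a subject-specific template, and new epochs are scored by correlation against it. The signals, sizes, and the simple correlation score are assumptions for illustration; real P300 pipelines filter the EEG and use stronger classifiers.

```python
# Toy P300 calibration and detection: average labelled training epochs
# into a template, then score new epochs by correlation with it.
# The EEG here is synthetic; real pipelines band-pass filter the signal
# and use more robust classifiers.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 0.6, 150)                   # one 600 ms epoch after a stimulus
p300 = np.exp(-((t - 0.3) ** 2) / 0.002)       # positive peak around 300 ms

def epoch(oddball):
    """One synthetic EEG epoch: noise, plus a P300 if the stimulus stood out."""
    return rng.normal(0, 0.5, t.size) + (p300 if oddball else 0)

# "Training"/calibration: average known-oddball epochs into a template.
template = np.mean([epoch(True) for _ in range(40)], axis=0)

def score(e):
    """Similarity of a new epoch to the subject-specific template."""
    return np.corrcoef(e, template)[0, 1]

print(score(epoch(True)))                      # high: oddball detected
print(score(epoch(False)))                     # low: standard stimulus
```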

Techniques like these are still considered unreliable and unproven, and thus U.S. courts have resisted admitting P300 data as evidence.

For now, most BCI technology relies on somewhat cumbersome EEG hardware that is definitely not stealth. Mark Stone, University of Washington, CC BY-ND

Imagine that instead of using a P300 signal to solve the mystery of a “stolen” item in the lab, someone used this technology to extract information about what month you were born or which bank you use – without your telling them. Our research group has collected data suggesting this is possible. Just using an individual’s brain activity – specifically, their P300 response – we could determine a subject’s preferences for things like favorite coffee brand or favorite sports.

But we could do it only when subject-specific training data were available. What if we could figure out someone’s preferences without previous knowledge of their brain signal patterns? Without the need for training, users could simply put on a device and go, skipping the step of loading a personal training profile or spending time in calibration. Research on trained and untrained devices is the subject of continuing experiments at the University of Washington and elsewhere.

It’s when the technology is able to “read” someone’s mind who isn’t actively cooperating that ethical issues become particularly pressing. After all, we willingly trade bits of our privacy all the time – when we open our mouths to have conversations or use GPS devices that allow companies to collect data about us. But in these cases we consent to sharing what’s in our minds. The difference with next-generation P300 technology under development is that the protection consent gives us may get bypassed altogether.

What if it’s possible to decode what you’re thinking or planning without you even knowing? Will you feel violated? Will you feel a loss of control? Privacy implications may be wide-ranging. Maybe advertisers could know your preferred brands and send you personalized ads – which may be convenient or creepy. Or maybe malicious entities could determine where you bank and your account’s PIN – which would be alarming.

With great power comes great responsibility

The potential ability to determine individuals’ preferences and personal information using their own brain signals has spawned a number of difficult but pressing questions: Should we be able to keep our neural signals private? That is, should neural security be a human right? How do we adequately protect and store all the neural data being recorded for research, and soon for leisure? How do consumers know if any protective or anonymization measures are being made with their neural data? As of now, neural data collected for commercial uses are not subject to the same legal protections covering biomedical research or health care. Should neural data be treated differently?

Neuroethicists from the UW Philosophy department discuss issues related to neural implants.
Mark Stone, University of Washington, CC BY-ND

These are the kinds of conundrums that are best addressed by neural engineers and ethicists working together. Putting ethicists in labs alongside engineers – as we have done at the CSNE – is one way to ensure that privacy and security risks of neurotechnology, as well as other ethically important issues, are an active part of the research process instead of an afterthought. For instance, Tim Brown, an ethicist at the CSNE, is “housed” within a neural engineering research lab, allowing him to have daily conversations with researchers about ethical concerns. He’s also easily able to interact with – and, in fact, interview – research subjects about their ethical concerns about brain research.

There are important ethical and legal lessons to be drawn about technology and privacy from other areas, such as genetics and neuromarketing. But there seems to be something important and different about reading neural data. They’re more intimately connected to the mind and who we take ourselves to be. As such, ethical issues raised by BCI demand special attention.

Working on ethics while tech’s in its infancy

As we wrestle with how to address these privacy and security issues, there are two features of current P300 technology that will buy us time.

First, most commercial devices available use dry electrodes, which rely solely on skin contact to conduct electrical signals. This technology is prone to a low signal-to-noise ratio, meaning that we can extract only relatively basic forms of information from users. The brain signals we record are known to be highly variable (even for the same person) due to things like electrode movement and the constantly changing nature of brain signals themselves. Second, electrodes are not always in ideal locations to record.

Altogether, this inherent lack of reliability means that BCI devices are not nearly as ubiquitous today as they may be in the future. As electrode hardware and signal processing improve, it will become easier to use devices like these continuously, and also easier to extract personal information from an unknowing individual. The safest advice would be to not use these devices at all.

The goal should be that the ethical standards and the technology mature together, so that future BCI users are confident their privacy is being protected as they use these kinds of devices. It’s a rare opportunity for scientists, engineers, ethicists and eventually regulators to work together to create even better products than were originally dreamed of in science fiction.

Shrinking data for surgical training

Image: MIT News

Laparoscopy is a surgical technique in which a fiber-optic camera is inserted into a patient’s abdominal cavity to provide a video feed that guides the surgeon through a minimally invasive procedure. Laparoscopic surgeries can take hours, and the video generated by the camera — the laparoscope — is often recorded. Those recordings contain a wealth of information that could be useful for training both medical providers and computer systems that would aid with surgery, but because reviewing them is so time consuming, they mostly sit idle.

Researchers at MIT and Massachusetts General Hospital hope to change that, with a new system that can efficiently search through hundreds of hours of video for events and visual features that correspond to a few training examples.

In work they presented at the International Conference on Robotics and Automation this month, the researchers trained their system to recognize different stages of an operation, such as biopsy, tissue removal, stapling, and wound cleansing.

But the system could be applied to any analytical question that doctors deem worthwhile. It could, for instance, be trained to predict when particular medical instruments — such as additional staple cartridges — should be prepared for the surgeon’s use, or it could sound an alert if a surgeon encounters rare, aberrant anatomy.

“Surgeons are thrilled by all the features that our work enables,” says Daniela Rus, an Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and senior author on the paper. “They are thrilled to have the surgical tapes automatically segmented and indexed, because now those tapes can be used for training. If we want to learn about phase two of a surgery, we know exactly where to go to look for that segment. We don’t have to watch every minute before that. The other thing that is extraordinarily exciting to the surgeons is that in the future, we should be able to monitor the progression of the operation in real-time.”

Joining Rus on the paper are first author Mikhail Volkov, who was a postdoc in Rus’ group when the work was done and is now a quantitative analyst at SMBC Nikko Securities in Tokyo; Guy Rosman, another postdoc in Rus’ group; and Daniel Hashimoto and Ozanan Meireles of Massachusetts General Hospital (MGH).

Representative frames

The new paper builds on previous work from Rus’ group on “coresets,” or subsets of much larger data sets that preserve their salient statistical characteristics. In the past, Rus’ group has used coresets to perform tasks such as deducing the topics of Wikipedia articles or recording the routes traversed by GPS-connected cars.

In this case, the coreset consists of a couple hundred or so short segments of video — just a few frames each. Each segment is selected because it offers a good approximation of the dozens or even hundreds of frames surrounding it. The coreset thus winnows a video file down to only about one-tenth its initial size, while still preserving most of its vital information.
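
To give a flavour of this kind of frame selection, here is a sketch of a greedy, k-center-style routine that repeatedly keeps the frame worst approximated by the frames already kept. The random “features” and the selection rule are stand-ins; the actual coreset construction used in the paper differs in detail and comes with formal guarantees.

```python
# Greedy, k-center-style frame selection: keep frames that are poorly
# approximated by the frames already kept. Frame "features" here are
# random stand-ins for real video descriptors.
import numpy as np

rng = np.random.default_rng(2)
frames = rng.normal(0, 1, (1000, 64))          # 1000 frames, 64-D feature stand-ins

def greedy_coreset(X, k):
    """Repeatedly add the frame farthest from every frame chosen so far."""
    chosen = [0]
    dist = np.linalg.norm(X - X[0], axis=1)    # distance to nearest chosen frame
    while len(chosen) < k:
        i = int(np.argmax(dist))               # most poorly covered frame
        chosen.append(i)
        dist = np.minimum(dist, np.linalg.norm(X - X[i], axis=1))
    return chosen

core = greedy_coreset(frames, 100)             # keep roughly one-tenth of the frames
print(len(core))
```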

For this research, MGH surgeons identified seven distinct stages in a procedure for removing part of the stomach, and the researchers tagged the beginnings of each stage in eight laparoscopic videos. Those videos were used to train a machine-learning system, which was in turn applied to the coresets of four laparoscopic videos it hadn’t previously seen. For each short video snippet in the coresets, the system was able to assign it to the correct stage of surgery with 93 percent accuracy.

“We wanted to see how this system works for relatively small training sets,” Rosman explains. “If you’re in a specific hospital, and you’re interested in a specific surgery type, or even more important, a specific variant of a surgery — all the surgeries where this or that happened — you may not have a lot of examples.”

Selection criteria

The general procedure that the researchers used to extract the coresets is one they’ve previously described, but coreset selection always hinges on specific properties of the data it’s being applied to. The data included in the coreset — here, frames of video — must approximate the data being left out, and the degree of approximation is measured differently for different types of data.

Machine learning can itself be thought of as a problem of approximation. In this case, the system had to learn to identify similarities between frames of video from separate laparoscopic feeds that denoted the same phase of a surgical procedure. The metric of similarity it arrived at also served to assess how well the video frames included in the coreset approximated those that were omitted.

“Interventional medicine — surgery in particular — really comes down to human performance in many ways,” says Gregory Hager, a professor of computer science at Johns Hopkins University who investigates medical applications of computer and robotic technologies. “As in many other areas of human endeavor, like sports, the quality of the human performance determines the quality of the outcome that you achieve, but we don’t know a lot about, if you will, the analytics of what creates a good surgeon. Work like what Daniela is doing and our work really goes to the question of: Can we start to quantify what the process in surgery is, and then within that process, can we develop measures where we can relate human performance to the quality of care that a patient receives?”

“Right now, efficiency” — of the kind provided by coresets — “is probably not that important, because we’re dealing with small numbers of these things,” Hager adds. “But you could imagine that, if you started to record every surgery that’s performed — we’re talking tens of millions of procedures in the U.S. alone — now it starts to be interesting to think about efficiency.”

RoboCup video series: Junior league

RoboCup is an international scientific initiative with the goal of advancing the state of the art of intelligent robots. Established in 1997, its original mission was to field a team of robots capable of winning against the human soccer World Cup champions by 2050.

To celebrate 20 years of RoboCup, the Federation is launching a video series featuring each of the leagues with one short video for those who just want a taster, and one long video for the full story. Robohub will be featuring one league every week leading up to RoboCup 2017 in Nagoya, Japan.

In our final set of videos, we are featuring the RoboCupJunior league! RoboCupJunior is a project-oriented educational initiative that sponsors local, regional and international robotic events for young students. It is designed to introduce RoboCup to primary and secondary school children, as well as undergraduates who do not yet have the resources to get involved in the senior leagues.

Short version:

Long version:

You can view all the videos on the RoboCup playlist below:

https://www.youtube.com/playlist?list=PLEfaZULTeP_-bqFvCLBWnOvFAgkHTWbWC

Please spread the word! If you would like to join a team, click here for more information.


Robots offer the elderly a helping hand

Humanoid robots under development can be programmed to detect changes in an elderly person’s preferences and habits. Image credit: GrowMeUp

by Helen Massy-Beresford

Low birth rates and higher life expectancies mean that people aged over 65 will account for 28.7 % of Europe’s population by 2080, according to Eurostat, the EU’s statistics arm.

It means the age-dependency ratio – the proportion of the elderly compared with the number of workers – will almost double from 28.8 % in 2015 to 51 % in 2080, straining healthcare systems and national budgets.

Yet there’s hope marching over the horizon, in the form of robots.

The creators of one humanoid robot under development for the elderly say it can understand people’s actions and learn new behaviours in response, even though it is devoid of arms.

Robots can be programmed to learn an elderly person’s preferences and habits and to detect changes in behaviour: if a yoga devotee misses a class, for example, the robot will ask why, and if the person falls, it will automatically alert caregivers or emergency services. A sketch of this kind of routine monitoring follows below.
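
As a purely hypothetical sketch of such routine monitoring: the following learns how often an activity normally happens and flags a week that deviates strongly. The activity counts, the two-sigma threshold, and the messages are all invented for illustration.

```python
# Hypothetical routine-change detector: learn the normal frequency of an
# activity from history, then flag weeks that deviate strongly from it.
import statistics

weekly_yoga_sessions = [2, 2, 3, 2, 2, 3, 2]   # invented history of a weekly habit
mean = statistics.mean(weekly_yoga_sessions)
sd = statistics.stdev(weekly_yoga_sessions)

def check_week(count):
    """Flag a week that deviates strongly from the learned routine."""
    if sd and abs(count - mean) > 2 * sd:      # two-sigma rule, chosen arbitrarily
        return "routine changed: ask the user why"
    return "routine looks normal"

print(check_week(0))                           # missed every class: robot asks why
print(check_week(2))                           # typical week: no action
```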

Yet there’s still a way to go before these devices will be able to bring out a tray of tea and biscuits when visitors drop by, according to its creator.

At the moment there are tasks that the robot can perform perfectly in the lab but that still present challenges elsewhere, says Dr Luís Santos from the University of Coimbra in Portugal, who has been developing the technology as part of an EU-funded research project known as GrowMeUp.

The proportion of elderly people is expected to almost double by 2080, so researchers are looking to robots to see if they can help care for the aging population. Image credit: GrowMeUp

‘There is a mismatch between what elderly people want and what science and technology can provide – some of them are expecting robots to do all types of household activities, engage them in everyday gossip or physically interact with them as another human would do,’ says Dr Santos.

The team is working on making the robot’s dialogue as natural and intuitive as possible and on improving its ability to navigate an older person’s home safely, using a low-cost laser and a camera. A second prototype will be tested with elderly people in the coming months. Even so, Dr Santos estimates that these devices are at least four to six years away from commercialisation.

Revolution

He sees robotics as just a part of a wider revolution underway in how societies care for the elderly, with connectivity and augmented reality also playing a role.

‘In the future, elderly care will also be very focused on information and communications technologies – for example virtual access to doctors or care institutions and 24/7 monitoring in a non-invasive way are likely to become standard,’ he said.

Yet researchers believe that keeping the technology unobtrusive is key – no wearable devices or cumbersome cameras cluttering up people’s homes.

Dr Maria Dagioglou from the National Centre of Scientific Research ‘Demokritos’ in Greece, said: ‘We wanted to avoid a Big Brother scenario, so data privacy is important but also dignity.’

She is looking at ways to integrate robotics technology into a smart home equipped with connected devices, automation and sensors, as part of the EU-funded RADIO project.

Researchers are figuring out ways of putting robots in homes for virtual access to healthcare and constant monitoring, yet that are also non-invasive. Image credit: RADIO

Dr Stasinos Konstantopoulos, the scientific manager of the RADIO project, added: ‘All monitoring happens as the user interacts with the system to control the house, for example, to regulate the temperature, and to ask the robot to run errands, like finding misplaced items.’

Residents interact with the system via a tablet or smartphone. The equipment, which should take only a day to install, monitors elements of an elderly person’s day-to-day life, processing and managing the data so that medical professionals can track and assess their level of independence through smartphone notifications.

‘It’s a constant safety net in case something starts to be worrying,’ said Dr Dagioglou.

The goal of innovations like this is to allow people to live independently for longer.

A crucial element of this is finding ways for older people to keep up their activity levels, and this is an area where robots could really come into their own.

Dr Luigi Palopoli at the University of Trento in Italy said: ‘Our robot pushes them to do their exercise, to go out and about; it extracts information on their interests and on their fears and makes them part of a network.’

Barriers

‘We want to tear down the emotional barriers that make them stay at home and degrade the quality of their life,’ he said.

As part of the EU-funded Acanto project, he is developing a robot called FriWalk, following on from the progress made during a previous EU-funded project, the DALi project.

The team has worked hard to make the FriWalk robot look energetic and appealing and to ensure it offers useful services like carrying small items or giving directions.

With prototypes built, the researchers will begin clinical trials in Spain in the next few months, along with public demonstrations of the FriWalk in museums and other public spaces.

Researchers will start clinical trials in Spain of a robotic prototype designed to help elderly people to exercise. Image credit: Acanto

Further ahead, Dr Palopoli hopes for interest from established manufacturers and start-ups to bring the FriWalk technology to the market.


Notes and pics from Xponential in Dallas, Innorobo in Paris and ICRA in Singapore

Conferences and trade shows, held in interesting locations around the world, can be entertaining, informative and an opportunity to explore new places, meet new people and renew acquaintances. Three recent examples: Xponential, the mostly defense-related unmanned land, sea and air show, held in Dallas; Innorobo, focused on service robotics, in Paris; and ICRA, the IEEE’s premier robotics conference, in Singapore.

ICRA

The 2017 IEEE International Conference on Robotics and Automation (ICRA), the IEEE’s principal forum for robotics researchers to present their work, was held this year at the Marina Bay Sands Hotel and Convention Center in Singapore. ICRA continues to attract more highly cited research papers than any other global robotics conference (including IROS).

In an IEEE Spectrum review of the biomedical portion of the conference, a swallowable, magnetically guided capsule robot capable of needle aspiration and an autonomous snake-like colonoscopy robot were two of the hits. Another reviewer singled out the rehab exoskeletons, haptic interfaces, modular robot components and many ROS-enabled solutions. Overall, almost 3,000 robotics researchers attended ICRA 2017, and most found plenty of interest (including Singapore and the Marina Bay Sands itself).

Xponential – all things unmanned

Xponential, the annual trade show and conference of the Association for Unmanned Vehicle Systems International (AUVSI), held this year in Dallas, Texas, showed the changing nature of the industry and offered suggestions (guesses) as to where it is heading. 170,000 visitors attended, while 100 speakers and over 650 exhibitors put on this choreographed show of military weaponry, defense and security systems and equipment, and commercial unmanned air, land and sea systems.


AUVSI’s membership fees are discounted for members of the military and first responders and the exhibitor list continues to favor military/defense-related companies, but most of those companies now have a growing commercial component.

Autonomous vehicles have always been part of AUVSI’s constituency, but with all the money flowing into autonomous car startups, and the talent search underway to corral people to make this new industry happen, a small portion of this year’s Xponential was devoted to the prospect of that future.

The folks at the National Robotics Education Foundation (NREF) produced a gallery of over 300 photos from Xponential, along with a special set of pictures of UAS engines from the show. Unmanned vehicles used by the military, for search and rescue, in support of agriculture and mining, for infrastructure inspection, and in a variety of other circumstances must stay aloft for long periods, hence the interest in engines that can support that amount of air time.

Innorobo

I visited Innorobo. It is a necessary show in a rapidly changing arena. Over 7,000 visitors perused an eclectic group of 170 startups, integrators, component manufacturers and service robot providers exhibiting a wide range of products and services at a site on the outskirts of Paris. Over the 3-day show, 50 speakers explored topics from robotics-related AI to philosophical discussions about law and ethics to the latest innovations in personal and professional service robotics.

The IFR (International Federation of Robotics) says that robot installations in France increased by 22% to 1,400 units in 2016 (compared to 700 units in the UK), particularly within the car industry. France ranks 2nd within the EU for robot density (the UK is 10th). Innorobo started as a show to promote France’s robotics industry (there are 225+ French companies in The Robot Report’s directories and on our global map). First held in Lyon, the show grew to its present size through the hard work and willpower of a small group of inventive women entrepreneurs, then relocated to Paris, where it has been held for the last two years. As its focus expanded from promoting in-country robotics to showcasing global innovation from the startups, research labs and service robot companies making inroads around the world, the show has become a valuable mainstay for the European press, investors, business executives, students and roboticists alike.

Events, Directories and The Robot Report’s Global Map

From time to time it is worth tooting the horn of the free resources available on The Robot Report. Our events calendar, our directory of companies and educational institutions involved in the robotics industry, and our global map for job seekers and researchers alike are free and always kept up to date.

There are still 28 robotics-related events remaining in 2017. Check them out on our events calendar.

It’s not just self-driving cars; trains could soon be autonomous too!

Judging by how often self-driving cars come up in scientific discussions and in the media, they are not only the next big thing but might actually take over as our main means of transportation. Traditional industries like the railways, on the other hand, seem to have lost that race already. But what if new technologies, such as Internet of Things (IoT) devices and Artificial Intelligence (AI), were used not only to create new transportation modes, but to transform old ones as well?

If we get this digitization right, then trains, as the winners of the first industrial revolution, could in fact be here to stay.

Long-distance

It is true that the technology behind autonomous cars has enormous potential and that they might emerge as the winners when it comes to shorter distances. But I believe that trains have a very real chance at becoming the transportation of choice for long-distance travel.

How exactly could digitization make this happen?

With the help of new digital processes, rail companies could increase the capacity of their networks and resolve traffic bottlenecks. This will, in turn, help more people reach their destinations sooner. The use of emerging technologies could also mean that the trains of the future will not only be more comfortable, but also more energy-efficient, safer and faster than cars over long distances.

Some pieces of this puzzle are already in place.

What remains is for the rail industry to identify the changes still needed to become ready for the future, and to embrace IoT technologies and AI as the chief enablers.

What the rail industry already has going for it

1. Rail is energy-efficient. Government institutions regularly examine the energy efficiency of different modes of transportation. A recent US study, for instance, shows that high-speed trains are up to eight times more energy-efficient than commercial planes, and four times more energy-efficient than cars over the same distance. While the overall trend is the same globally, the numbers vary by region.

High-speed trains in Europe need only one-third of the energy used by automotive travel. The Japanese high-speed rail industry is even more advanced – it uses only one-sixth of the energy.

2. Rail traffic is clean. Cargo transport by road produces eight times more CO2 emissions than freight trains. The numbers become even clearer when freight and passenger transport are combined: railways account for only 1.3% of the transport sector’s total CO2 emissions, whereas aviation makes up 12.4%, shipping 12.7% and road transport 72.2%.

[Chart: CO2 emissions by transport mode. Source: European Environment Agency, 2015]

3. Rail is safer than other means of passenger transportation. The US Department of Transportation reports that, in 2010, the number of people injured on the highway was 304 times higher than the number of casualties in railroad accidents.

In Europe, where the predominance of car travel isn’t as pronounced as it is in North America, the numbers still show a clear trend: fifteen times as many people were fatally injured in car accidents in 2013 as in railway-related accidents.

4. Rail is already on the rise. The total length of high-speed railway lines in Central Europe has increased 16-fold since 1981 and the expansion of the European rail network is still ongoing.

In general, worldwide passenger transport by train has doubled since 1985. People seem to like taking the train and they won’t stop anytime soon.

[Chart: growth in worldwide passenger rail transport. Source: UIC, 2015]

What are the challenges that lie ahead and how can we tackle them?

Low network capacities and traffic bottlenecks on busy routes are among the main factors holding back progress in the rail industry. If we can’t figure out how to bring even more passengers and trains onto the railway network, and how to make sure those trains arrive on time, then rail won’t be part of the “future of mobility”. The rail industry needs to adopt new technologies and operational processes in order to keep up.

IoT technologies and AI have the potential to enable this change, and in some areas it has already begun. Smart infrastructure components and autonomous trains will soon be interconnected and able to communicate with each other.

This machine-to-machine communication supports the efficiency of train services. It also means that smart sensors can transmit field data to the right platforms as efficiently as possible, that machine data can be used for more than just operation protocols, and that data from very diverse sources can easily be aggregated.
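
To make the aggregation point concrete, the first step is usually to normalise messages from different devices into one shared record format before they reach an analytics platform. Here is a minimal sketch; the schema, device types and field names are all invented for illustration:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SensorRecord:
    """Common schema shared by all field devices."""
    device_id: str
    kind: str          # e.g. "axle_temp", "track_vibration"
    value: float
    unit: str
    timestamp: datetime

def from_legacy_axle_sensor(raw: dict) -> SensorRecord:
    """Adapter for an (invented) older device reporting Fahrenheit and Unix seconds."""
    return SensorRecord(
        device_id=raw["id"],
        kind="axle_temp",
        value=(raw["temp_f"] - 32) * 5 / 9,   # normalise to Celsius
        unit="degC",
        timestamp=datetime.fromtimestamp(raw["ts"], tz=timezone.utc),
    )

# One adapter per device family yields one uniform stream downstream.
print(from_legacy_axle_sensor({"id": "ax-17", "temp_f": 104.0, "ts": 1496300000}))
```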

If train network operators combine these smart devices with machine learning algorithms, they can optimize their routes in real-time and distribute traffic more evenly.
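
As a toy illustration of what distributing traffic more evenly could look like in code (not any operator’s actual system), the sketch below inflates each track segment’s travel time by its current occupancy and then picks the cheapest path. The network, loads and penalty factor are made up:

```python
import heapq

# Invented track graph: nominal segment travel times in minutes.
graph = {
    "A": {"B": 10, "C": 14},
    "B": {"D": 12},
    "C": {"D": 9},
    "D": {},
}
# Current occupancy per segment, 0.0 (empty) to 1.0 (saturated).
occupancy = {("A", "B"): 0.9, ("A", "C"): 0.2, ("B", "D"): 0.8, ("C", "D"): 0.1}

def congestion_aware_path(src, dst, penalty=2.0):
    """Dijkstra over travel times inflated by current segment occupancy."""
    queue, visited = [(0.0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, minutes in graph[node].items():
            weighted = minutes * (1 + penalty * occupancy.get((node, nxt), 0.0))
            heapq.heappush(queue, (cost + weighted, nxt, path + [nxt]))
    return float("inf"), []

print(congestion_aware_path("A", "D"))  # prefers the quieter A-C-D route
```

Even though A-B-D is nominally shorter (22 minutes versus 23), the congestion penalty steers traffic onto the quieter route, which is the point of real-time redistribution.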

Bottlenecks and maintenance

A very common cause of temporary traffic bottlenecks is unplanned maintenance. Railway lines are closed completely, or speed restrictions are put in place, until the damage to the infrastructure can be fixed. Even though this problem has been around for as long as rail travel has existed, that does not mean we have to accept it as inevitable.

Rail companies have already started to install smart sensors in their trains and infrastructure, so that they can react faster when problems arise.

Technologies that combine this so-called condition monitoring with AI go one step further.

These solutions not only monitor the current health of rail infrastructure, but can also predict wear and potential failures in advance, enabling rail companies to plan maintenance in time and prevent train delays.
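
As a minimal sketch of the prediction step, assume a made-up vibration metric and failure threshold: fit a linear wear trend to recent condition-monitoring readings and estimate when the threshold will be crossed, so maintenance can be scheduled before a closure is forced. Real systems use far richer models; this only shows the shape of the idea.

```python
import numpy as np

# Invented daily vibration readings (mm/s) from one track-side sensor.
days = np.arange(30)
readings = 2.0 + 0.05 * days + np.random.default_rng(42).normal(0.0, 0.05, 30)
FAILURE_THRESHOLD = 5.0   # assumed level that forces a speed restriction

# Fit a linear wear trend and extrapolate to the threshold crossing.
slope, intercept = np.polyfit(days, readings, 1)
if slope <= 0:
    print("No upward wear trend detected.")
else:
    remaining = (FAILURE_THRESHOLD - intercept) / slope - days[-1]
    print(f"Estimated {remaining:.0f} days until maintenance is needed.")
```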

Real-time route optimization and fewer train delays caused by unplanned maintenance would not only reduce operational costs for rail companies, but also make rail travel more appealing to passengers.

Add to this helpful IoT applications for the modern traveller, such as interactive maps of train stations, mobile tickets and journey-planning apps, and rail will become part of the future of transportation – especially for long-distance travel.

Thought leadership in social sector robotics

WeRobotics Global has become a premier forum for social good robotics; the feedback featured below was unsolicited. On June 1, 2017, we convened our first annual global event, bringing together 34 organizations in New York City (full list below) to shape the global agenda and the future use of robotics in the social good sector. WeRobotics Global was kindly hosted by the Rockefeller Foundation, the first donor to support our efforts. They opened the event with welcome remarks and handed over to Patrick Meier from WeRobotics, who provided an overview of WeRobotics and the big-picture context for social sector robotics.

I’ve been to countless remote sensing conferences over the past 30 years but WeRobotics Global absolutely ranks as the best event I’ve been to – Remote Sensing Expert

The event was really mind-blowing. I’ve participated in many workshops over the past 20 years. WeR Global was by far the most insightful and practical. It is also amazing how closely together everyone is working — irrespective of who is working where (NGO, UN, private sector, donor). I’ve never seen such a group of people come together this way. – Humanitarian Professional

WeRobotics Global is completely different to any development meeting or workshop I’ve been to in recent years. The discussions flowed seamlessly between real world challenges, genuine bottom-up approaches and appropriate technology solutions. Conversations were always practical and strikingly transparent. This was a highly unusual event. – International Donor

The first panel featured our Flying Labs Coordinators from Tanzania (Yussuf), Peru (Juan) and Nepal (Uttam). Each shared the hard work they’ve been doing over the past 6-10 months on localizing and applying robotics solutions. Yussuf spoke about the lab’s use of aerial robotics for disaster damage assessment following the earthquake in Bukoba and for coastal monitoring, environmental monitoring and forestry management. He emphasized the importance of community engagement and closed with new projects that Tanzania Flying Labs is working on such as mangrove monitoring for the Department of Forestry. Juan presented the work of the labs in the Amazon Rainforest, which is a joint effort with the Peruvian Ministry of Health. Together, they are field-testing the use of affordable and locally repairable flying robots for the delivery of antivenom and other medical payload between local clinics and remote villages. Juan noted that Peru Flying Labs is gearing up to carry out a record number of flight tests this summer using a larger and more diverse fleet of flying robots. Last but not least, Uttam showed how Nepal Flying Labs has been using flying robots for agriculture monitoring, damage assessment and mapping of property rights. He also gave an overview of the social entrepreneurship training and business plan competition recently organized by Nepal Flying Labs. This business incubation training has resulted in the launch of 4 new Nepali start-up companies focused on Robotics-as-a-Service. 

The following images provide highlights from each of our Flying Labs: Tanzania, Peru and Nepal.

The second panel featured talks on sector-based solutions, starting with the International Federation of the Red Cross (IFRC). The Federation (Aarathi) spoke about its joint project with WeRobotics, looking at cross-sectoral needs for various robotics solutions in the South Pacific. IFRC is exploring the possibility of launching a South Pacific Flying Labs with a strong focus on women and girls. Pix4D (Lorenzo) addressed the role of aerial robotics in agriculture, giving concrete examples of successful applications while providing guidance to our Flying Labs Coordinators. The Wall Street Journal (Sally) spoke about the use of aerial robotics in news gathering and investigative journalism, specifically emphasizing the importance of using flying robots for storytelling. Duke Marine Labs (David) closed the panel with an overview of their projects in nature conservation and marine life protection, highlighting their use of machine learning for automated feature detection and real-time analysis.

Panel number three addressed the transformation of transportation. UNICEF (Judith) highlighted the field tests they have been carrying out in Malawi, using cargo robotics to transport HIV samples in order to accelerate HIV testing and thus treatment. UNICEF has also launched an air corridor in Malawi to enable further field-testing of flying robots. MSF (Oriol) shared their approach to cargo delivery using aerial robotics, with examples from Papua New Guinea (PNG), and emphasized the importance of localizing appropriate robotics solutions that can be maintained locally. MSF also called for the launch of a PNG Flying Labs. IAEA was unable to attend WeR Global, so Patrick and Adam from WeRobotics gave the talk instead: WeRobotics is teaming up with IAEA to design and test a release mechanism for sterilized mosquitos in order to reduce the incidence of Zika and other mosquito-borne illnesses. Finally, Llamasoft (Sid) closed the panel with a strong emphasis on the need to collect and share structured data in order to carry out accurate comparative cost-benefit analyses of cargo delivery via flying robots versus conventional means. Sid used the analogy of self-driving cars to highlight how problematic the current lack of data is for reliably evaluating the impact of cargo robotics.

The fourth and final panel went beyond aerial robotics. Digger (Thomas) showed how they convert heavy construction vehicles into semi-autonomous platforms to clear landmines and debris in conflict zones like Iraq and Syria. Science in the Wild (Ulyana) was alas unable to attend the event, so Patrick from WeRobotics gave the talk instead. This focused on the use of swimming robots to monitor glacial lakes in the Himalaya; the purpose of the effort is to identify cracks in the lake floors before they trigger what local villagers call ‘the tsunamis of the Himalaya’. OpenROV (David) gave a talk on the use of diving robots, sharing real-world examples and providing exciting updates on the new Trident diving robot. Planet Labs (Andrew) gave the closing talk, highlighting how space robotics (satellites) are being used across a wide range of social good projects. He emphasized the importance of integrating both aerial and satellite imagery to support social good projects.

The final session at WeR Global comprised breakout groups tasked with identifying next steps for WeRobotics and the social good sector more broadly. Many quality insights and recommendations were shared during the report-back. One such recommendation was to hold WeR Global again, and sooner rather than later, so we look forward to organizing WeRobotics Global 2018. We will provide updates via our blog and email list, and will also use those channels to share select videos of the individual talks from Global 2017 along with their slide decks.

In the meantime, a big thanks to all participants and speakers for making Global 2017 such an unforgettable event. And sincerest thanks to the Rockefeller Foundation for hosting us at their headquarters in New York City.
