
Survey: Examining perceptions of autonomous vehicles using hypothetical scenarios

Driverless car merging into traffic. How big of a gap between vehicles is acceptable? Image credit: Jordan Collver

I’m examining the perception of autonomous cars using hypothetical scenarios. Each of the hypothetical scenarios is accompanied with an image to help illustrate the scene — using grey tones and nondescript human-like features — along with the option to listen to the question spoken out loud to fully visualise an association. 

If you live in the UK, you can take this survey and help contribute to my research!

Public perception has the potential to affect the timescale and adoption of autonomous vehicles (AVs). As the technology advances, understanding attitudes and wider public acceptability is critical. It’s no longer a question of if, but when, we will make the transition. Long-range autonomous vehicles are expected between 2020 and 2025, with some estimates suggesting fully autonomous vehicles will take over by 2030. Most modern cars are already sold with automated features: automatic braking, autonomous parking, advanced lane assist, advanced cruise control and queue assist, for example. Adopting fully autonomous vehicles could deliver significant societal benefits: improved road safety, reduced pollution and congestion, and another mode of transport for the mobility impaired.

The project’s aim is to add to the conversation about public perception of AVs. Survey experiments can be extremely useful tools for studying public attitudes, especially when researchers are interested in the “effects of describing or presenting a scenario in a particular way.” This unusual and creative method may provide a model for other types of research survey in the future, where it’s difficult to visualise future technologies. An online survey was chosen to reduce small-sample bias and maximise responses from participants in the UK.

You can take this survey by clicking above, or alternatively, click the following link:

https://uwe.onlinesurveys.ac.uk/visualise-this

CARNAC program researching autonomous co-piloting

Credit: Aurora Flight Sciences.

DARPA, the Defense Advanced Research Projects Agency, is researching autonomous co-piloting so that aircraft can fly without a human pilot on board. The robotic system — called the Common Aircraft Retrofit for Novel Autonomous Control (CARNAC) (not to be confused with the old Johnny Carson Carnac routine) — has the potential to reduce costs, enable new missions, and improve performance.

CARNAC, the Johnny Carson version.

Unmanned aircraft are generally built from scratch with robotic systems integrated from the earliest design stages. Existing aircraft require extensive modification to add robotic systems.

RE2, a CMU spin-off located in Pittsburgh, makes mobile manipulators for defense and space applications. It has just received an SBIR award backed by a US Air Force development contract to develop a retrofit kit that would provide a robotic piloting solution for legacy aircraft.

“Our team is excited to incorporate the Company’s robotic manipulation expertise with proven technologies in applique systems, vision processing algorithms, and decision making to create a customized application that will allow a wide variety of existing aircraft to be outfitted with a robotic pilot,” stated Jorgen Pedersen, president and CEO of RE2 Robotics. “By creating a drop-in robotic pilot, we have the ability to insert autonomy into and expand the capabilities of not only traditionally manned air vehicles, but ground and underwater vehicles as well. This application will open up a whole new market for our mobile robotic manipulator systems.”

Aurora Flight Sciences, a Manassas, VA developer of advanced unmanned systems and aerospace vehicles, is working on a similar DARPA project, the Aircrew Labor In-Cockpit Automation System (ALIAS), which is designed as a drop-in avionics and mechanics package that can be quickly and cheaply fitted to a wide variety of fixed-wing and rotary-wing aircraft, from a Cessna to a B-52. Once installed, ALIAS is able to analyze the aircraft and adapt itself to the job of the second pilot.

Credit: Aurora Flight Sciences

Assistive robots compete in Bristol

The Bristol Robotics Laboratory (BRL) will host the first European Commission-funded European Robotics League (ERL) tournament for service robots to be held in the UK.

Two teams, from the BRL and Birmingham, will pit their robots against each other in a series of events between 26 and 30 June.

Robots designed to support people with care-related tasks in the home will be put to the test in a simulated home test bed.

The assisted living robots of the two teams will face various challenges, including understanding natural speech and finding and retrieving objects for the user.

The robots will also have to greet visitors at the door appropriately, such as welcoming a doctor on their visit, or turning away unwanted visitors.

Associate Professor Praminda Caleb-Solly, Theme Leader for Assistive Robotics at the BRL, said, “The lessons learned during the competition will contribute to how robots in the future help people, such as those with ageing-related impairments and those with other disabilities, live independently in their own homes for as long as possible.

“This is particularly significant with the growing shortage of carers available to provide support for an ageing population.”

The BRL, the host of the UK’s first ERL Service Robots tournament, is a joint initiative of the University of the West of England and the University of Bristol. Its many research areas include swarm robotics, unmanned aerial vehicles, driverless cars, medical robotics and robotic sensing for touch and vision. BRL’s assisted living research group is developing interactive assistive robots as part of an ambient smart home ecosystem to support independent living.

The ERL Service Robots tournament will be held in the BRL’s Anchor Robotics Personalised Assisted Living Studio, which was set up to develop, test and evaluate assistive robotic and other technologies in a realistic home environment.

The studio was recently certified as a test bed by the ERL, which runs alongside similar competitions for industrial robots and for emergency robots, which includes vehicles that can search for and rescue people in disaster-response scenarios.

The two teams in the Bristol event will be Birmingham Autonomous Robotics Club (BARC) led by Sean Bastable from the School of Computer Science at the University of Birmingham, and the Healthcare Engineering and Assistive Robotics Technology and Services (HEARTS) team from the BRL led by PhD Student Zeke Steer.

BARC has developed its own robotics platform, Dora, and HEARTS will use a TIAGo Steel robot from PAL Robotics with a mix of bespoke and proprietary software.

The Bristol event will be open for public viewing in the BRL on the afternoon of the 29th of June 2017 (Bookable via EventBrite), and include short tours of the assisted living studio for the attendees. It will be held during UK Robotics Week, on 24-30 June 2017, when there will be a nationwide programme of robotics and automation events.

The BRL will also be organising focus groups on 28 and 29 June 2017 (Bookable via EventBrite and here) as part of the UK Robotics Week, to demonstrate assistive robots and their functionality, and seek the views of carers and older adults on these assistive technologies, exploring further applications and integration of such robots into care scenarios.

The European Commission-funded European Robotics League (ERL) is the successor to the RoCKIn, euRathlon and EuRoC robotics competitions, all funded by the EU and designed to foster scientific progress and innovation in cognitive systems and robotics. The ERL is funded by the European Union’s Horizon 2020 research and innovation programme. See: https://www.eu-robotics.net/robotics_league/

The ERL is part of the SPARC public-private partnership set up by the European Commission and the euRobotics association to extend Europe’s leadership in civilian robotics. SPARC’s €700 million of funding from the Commission over 2014-2020 is being combined with €1.4 billion of funding from European industry. See: http://www.eu-robotics.net/sparc

euRobotics is a European Commission-funded non-profit organisation which promotes robotics research and innovation for the benefit of Europe’s economy and society. It is based in Brussels and has more than 250 member organisations. See: www.eu-robotics.net

Robots Podcast #237: Deep Learning in Robotics, with Sergey Levine

In this episode, Audrow Nash interviews Sergey Levine, assistant professor at UC Berkeley, about deep learning in robotics. Levine explains what deep learning is and discusses the challenges of using it in robotics. Lastly, he speaks about his collaboration with Google and some of the surprising behavior that emerged from his deep learning approach (how the system grasps soft objects).

In addition to the main interview, Audrow spoke with Levine about his professional path: the questions that motivate him, why his PhD experience was different from what he had expected, the value of self-directed learning, work-life balance, and what he wishes he’d known in graduate school.

A video of Levine’s work in collaboration with Google.

 

Sergey Levine

Sergey Levine is an assistant professor at UC Berkeley. His research focuses on robotics and machine learning. In his PhD thesis, he developed a novel guided policy search algorithm for learning complex neural network control policies, which was later applied to enable a range of robotic tasks, including end-to-end training of policies for perception and control. He has also developed algorithms for learning from demonstration, inverse reinforcement learning, efficient training of stochastic neural networks, computer vision, and data-driven character animation.


More efficient and safer: How drones are changing the workplace

Photo credit: Pierre-Yves Guernier

Technology-driven automation plays a critical role in the global economy, and its visibility in our lives is growing. As technology impacts more and more jobs, individuals and enterprises find themselves wondering what effect the current wave of automation will have on their future economic prospects.

Advances in robotics and AI have led to modern commercial drone technology, which is changing the fundamental way enterprises interact with the world. Drones bridge the physical and digital worlds. They enable companies to combine the power of scalable computing resources with pervasive, affordable sensors that can go anywhere. This creates an environment in which businesses can make quick, accurate decisions based on enormous datasets derived from the physical world.

Removing dangers

For individuals in jobs that involve lots of time spent traveling to the extremities of where enterprises do business, or to a precarious perch to get a good view, like infrastructure inspection or site management, an opportunity presents itself.

Historically, it’s been a dangerous job to identify the state of affairs in the physical world and analyze and report on that information. It may have required climbing on tall buildings or unstable areas, or travelling to far-flung sites to inspect critical infrastructure, like live power lines or extensive dams.

Commercial drones, as part of the current wave of automation technology, will fundamentally change this process. The jobs involved aren’t going away, but they are going to change.

A January 2017 study by McKinsey on Automation, Employment, and Productivity reported that less than 5% of all occupations can be automated entirely using demonstrated technologies, but two-thirds of all jobs could have 30% of their work automated. Many jobs will not only become more efficient; they will also be safer, and the skills required will be more mental than physical.

New ways to amass data

Jobs that were once considered gruelling and monotonous will look more like knowledge-worker jobs in the near future. Until now, people in these jobs have had to go to great lengths to collect data for analysis and decision-making. That data can now be collected without putting people in harm’s way. Without the need to don a harness, or climb to dangerous heights, people in these jobs can extend their career.

We’ve seen this firsthand in our own work conducting commercial drone operation training for many of the largest insurers in America, whose teams typically include adjusters in the latter stages of their career.

When you’re 50 years old, the physical demands of climbing on roofs to conduct inspections can make you think about an early retirement, or a career change.

Keeping hard-earned skills in the workplace

But these workers are some of the best in the business, with decades of experience. No one wants to leave hard-earned skills behind due to physical limitations.

We’ve found industry veterans like these to be some of the most enthusiastic adopters of commercial drones for rooftop inspections. After one week-long session, these adjusters could operate a commercial drone to collect rooftop data without requiring any climbing. Their deep understanding of claims adjustment can be brought to bear in the field without the conventional physical demands.

Specialists with knowledge and experience like veteran insurance adjusters are far harder to find than someone who can learn how to use a commercial drone system. Removing the need to physically collect the data means the impact of their expertise can be global, and the talent competition for these roles will be global as well.

Digital skills grow in importance

Workers can come out on top in this shift by focusing on improving relevant digital skills. Their conventional daily-use manual tools will become far less important than those tools that enable them to have an impact digitally.

The tape measure and ladder will go by the wayside as more work is conducted with iPads and cloud software. This transition will also create many more opportunities to do work that simply doesn’t get accomplished today.

Take commercial building inspection as an example.

In the past, the value of a building inspection had to be balanced against many drawbacks, like the cost of stopping business so an inspection could be conducted, the liability of sending a worker to a roof, and the sheer size of sites.

Filling the data gap

The result is a significant data gap. The state of the majority of commercial buildings is simply unknown to their owners and underwriters.

Using drones for inspections dramatically reduces the inherent challenges of data collection, which makes it feasible to inspect far more buildings and creates a demand for human workers to analyze this new dataset. Filling this demand requires specialized knowledge and a niche skillset that the existing workers in this field, like the veterans from our training groups who were on the verge of leaving the field, are best-poised to provide.

This trend is happening in myriad industries, from insurance, to telecoms, to mining and construction.

Preparation now

Enterprises in industries that will be impacted by this technology need to prepare for this transformation now. Those that do not will not be around in 10 years.

Workers in jobs where careers are typically cut short due to physical risk need to invest in learning digital skills, so that they can extend the length of their career and increase their value, while reducing the inherent physical toll. Individuals who see their employers falling behind in innovation have the freedom to pursue a career with a more ambitious competitor, or to take a leadership role kickstarting initiatives internally to keep pace.

There’s no shortage of challenges to tackle or problems to solve in the world.

Commercial drones, and the greater wave of automation technology, will enable us to address more of them. This will create many opportunities for the workers who are prepared to capitalize on this technology. That preparation must begin now.

Helping or hacking? Engineers and ethicists must work together on brain-computer interface technology

A subject plays a computer game as part of a neural security experiment at the University of Washington.
Patrick Bennett, CC BY-ND

By Eran Klein, University of Washington and Katherine Pratt, University of Washington

 

In the 1995 film “Batman Forever,” the Riddler used 3-D television to secretly access viewers’ most personal thoughts in his hunt for Batman’s true identity. By 2011, the metrics company Nielsen had acquired Neurofocus and had created a “consumer neuroscience” division that uses integrated conscious and unconscious data to track customer decision-making habits. What was once a nefarious scheme in a Hollywood blockbuster seems poised to become a reality.

Recent announcements by Elon Musk and Facebook about brain-computer interface (BCI) technology are just the latest headlines in an ongoing science-fiction-becomes-reality story.

BCIs use brain signals to control objects in the outside world. They’re a potentially world-changing innovation – imagine being paralyzed but able to “reach” for something with a prosthetic arm just by thinking about it. But the revolutionary technology also raises concerns. Here at the University of Washington’s Center for Sensorimotor Neural Engineering (CSNE) we and our colleagues are researching BCI technology – and a crucial part of that includes working on issues such as neuroethics and neural security. Ethicists and engineers are working together to understand and quantify risks and develop ways to protect the public now.

Picking up on P300 signals

All BCI technology relies on being able to collect information from a brain that a device can then use or act on in some way. There are numerous places from which signals can be recorded, as well as infinite ways the data can be analyzed, so there are many possibilities for how a BCI can be used.

Some BCI researchers zero in on one particular kind of regularly occurring brain signal that alerts us to important changes in our environment. Neuroscientists call these signals “event-related potentials.” In the lab, they help us identify a reaction to a stimulus.

Examples of event-related potentials (ERPs), electrical signals produced by the brain in response to a stimulus. Tamara Bonaci, CC BY-ND

In particular, we capitalize on one of these specific signals, called the P300. It’s a positive peak of electricity that occurs toward the back of the head about 300 milliseconds after the stimulus is shown. The P300 alerts the rest of your brain to an “oddball” that stands out from the rest of what’s around you.

For example, you don’t stop and stare at each person’s face when you’re searching for your friend at the park. Instead, if we were recording your brain signals as you scanned the crowd, there would be a detectable P300 response when you saw someone who could be your friend. The P300 carries an unconscious message alerting you to something important that deserves attention. These signals are part of a still unknown brain pathway that aids in detection and focusing attention.
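The detection logic described above can be sketched in a few lines: average the stimulus-locked recordings (epochs) and look for a positive peak in a window around 300 ms. This is a hedged illustration, not the CSNE group's actual pipeline; the sampling rate, window, and threshold values are assumptions chosen for the example.

```python
# Illustrative P300 detection sketch. The P300 is a positive voltage
# deflection roughly 300 ms after a rare ("oddball") stimulus, so we
# average epochs time-locked to stimulus onset and test for a peak in
# a 250-450 ms window. Units, window, and threshold are hypothetical.

def average_epochs(epochs):
    """Average a list of equal-length voltage traces (microvolts)."""
    n = len(epochs)
    return [sum(samples) / n for samples in zip(*epochs)]

def has_p300(epochs, fs_hz=250, threshold_uv=3.0):
    """Return True if the averaged epoch shows a positive peak above
    threshold in the 250-450 ms post-stimulus window (each epoch is
    assumed to start at stimulus onset)."""
    avg = average_epochs(epochs)
    start = int(0.250 * fs_hz)  # 250 ms converted to a sample index
    end = int(0.450 * fs_hz)    # 450 ms
    return max(avg[start:end]) > threshold_uv
```

Averaging many trials is what makes the tiny P300 visible at all: uncorrelated background EEG tends to cancel out, while the stimulus-locked response survives.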

Reading your mind using P300s

P300s reliably occur any time you notice something rare or disjointed, like when you find the shirt you were looking for in your closet or your car in a parking lot. Researchers can use the P300 in an experimental setting to determine what is important or relevant to you. That’s led to the creation of devices like spellers that allow paralyzed individuals to type using their thoughts, one character at a time.

It also can be used to determine what you know, in what’s called a “guilty knowledge test.” In the lab, subjects are asked to choose an item to “steal” or hide, and are then shown many images repeatedly of both unrelated and related items. For instance, subjects choose between a watch and a necklace, and are then shown typical items from a jewelry box; a P300 appears when the subject is presented with the image of the item he took.

Everyone’s P300 is unique. In order to know what they’re looking for, researchers need “training” data. These are previously obtained brain signal recordings that researchers are confident contain P300s; they’re then used to calibrate the system. Since the test measures an unconscious neural signal that you don’t even know you have, can you fool it? Maybe, if you know that you’re being probed and what the stimuli are.

Techniques like these are still considered unreliable and unproven, and thus U.S. courts have resisted admitting P300 data as evidence.

For now, most BCI technology relies on somewhat cumbersome EEG hardware that is definitely not stealthy. Mark Stone, University of Washington, CC BY-ND

Imagine that instead of using a P300 signal to solve the mystery of a “stolen” item in the lab, someone used this technology to extract information about what month you were born or which bank you use – without your telling them. Our research group has collected data suggesting this is possible. Just using an individual’s brain activity – specifically, their P300 response – we could determine a subject’s preferences for things like favorite coffee brand or favorite sports.

But we could do it only when subject-specific training data were available. What if we could figure out someone’s preferences without previous knowledge of their brain signal patterns? Without the need for training, users could simply put on a device and go, skipping the step of loading a personal training profile or spending time in calibration. Research on trained and untrained devices is the subject of continuing experiments at the University of Washington and elsewhere.

It’s when the technology is able to “read” someone’s mind who isn’t actively cooperating that ethical issues become particularly pressing. After all, we willingly trade bits of our privacy all the time – when we open our mouths to have conversations or use GPS devices that allow companies to collect data about us. But in these cases we consent to sharing what’s in our minds. The difference with next-generation P300 technology under development is that the protection consent gives us may get bypassed altogether.

What if it’s possible to decode what you’re thinking or planning without you even knowing? Will you feel violated? Will you feel a loss of control? Privacy implications may be wide-ranging. Maybe advertisers could know your preferred brands and send you personalized ads – which may be convenient or creepy. Or maybe malicious entities could determine where you bank and your account’s PIN – which would be alarming.

With great power comes great responsibility

The potential ability to determine individuals’ preferences and personal information using their own brain signals has spawned a number of difficult but pressing questions: Should we be able to keep our neural signals private? That is, should neural security be a human right? How do we adequately protect and store all the neural data being recorded for research, and soon for leisure? How do consumers know if any protective or anonymization measures are being made with their neural data? As of now, neural data collected for commercial uses are not subject to the same legal protections covering biomedical research or health care. Should neural data be treated differently?

Neuroethicists from the UW Philosophy department discuss issues related to neural implants.
Mark Stone, University of Washington, CC BY-ND

These are the kinds of conundrums that are best addressed by neural engineers and ethicists working together. Putting ethicists in labs alongside engineers – as we have done at the CSNE – is one way to ensure that privacy and security risks of neurotechnology, as well as other ethically important issues, are an active part of the research process instead of an afterthought. For instance, Tim Brown, an ethicist at the CSNE, is “housed” within a neural engineering research lab, allowing him to have daily conversations with researchers about ethical concerns. He’s also easily able to interact with – and, in fact, interview – research subjects about their ethical concerns about brain research.

There are important ethical and legal lessons to be drawn about technology and privacy from other areas, such as genetics and neuromarketing. But there seems to be something important and different about reading neural data. They’re more intimately connected to the mind and who we take ourselves to be. As such, ethical issues raised by BCI demand special attention.

Working on ethics while tech’s in its infancy

As we wrestle with how to address these privacy and security issues, there are two features of current P300 technology that will buy us time.

First, most commercial devices available use dry electrodes, which rely solely on skin contact to conduct electrical signals. This technology is prone to a low signal-to-noise ratio, meaning that we can extract only relatively basic forms of information from users. The brain signals we record are known to be highly variable (even for the same person) due to things like electrode movement and the constantly changing nature of brain signals themselves. Second, electrodes are not always in ideal locations to record.

Altogether, this inherent lack of reliability means that BCI devices are not nearly as ubiquitous today as they may be in the future. As electrode hardware and signal processing improve, it will become easier to use devices like these continuously, and easier to extract personal information from an unknowing individual as well. The safest advice would be to not use these devices at all.

The goal should be that the ethical standards and the technology will mature together to ensure future BCI users are confident their privacy is being protected as they use these kinds of devices. It’s a rare opportunity for scientists, engineers, ethicists and eventually regulators to work together to create even better products than were originally dreamed of in science fiction.

Shrinking data for surgical training

Image: MIT News

Laparoscopy is a surgical technique in which a fiber-optic camera is inserted into a patient’s abdominal cavity to provide a video feed that guides the surgeon through a minimally invasive procedure. Laparoscopic surgeries can take hours, and the video generated by the camera — the laparoscope — is often recorded. Those recordings contain a wealth of information that could be useful for training both medical providers and computer systems that would aid with surgery, but because reviewing them is so time consuming, they mostly sit idle.

Researchers at MIT and Massachusetts General Hospital hope to change that, with a new system that can efficiently search through hundreds of hours of video for events and visual features that correspond to a few training examples.

In work they presented at the International Conference on Robotics and Automation this month, the researchers trained their system to recognize different stages of an operation, such as biopsy, tissue removal, stapling, and wound cleansing.

But the system could be applied to any analytical question that doctors deem worthwhile. It could, for instance, be trained to predict when particular medical instruments — such as additional staple cartridges — should be prepared for the surgeon’s use, or it could sound an alert if a surgeon encounters rare, aberrant anatomy.

“Surgeons are thrilled by all the features that our work enables,” says Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and senior author on the paper. “They are thrilled to have the surgical tapes automatically segmented and indexed, because now those tapes can be used for training. If we want to learn about phase two of a surgery, we know exactly where to go to look for that segment. We don’t have to watch every minute before that. The other thing that is extraordinarily exciting to the surgeons is that in the future, we should be able to monitor the progression of the operation in real-time.”

Joining Rus on the paper are first author Mikhail Volkov, who was a postdoc in Rus’ group when the work was done and is now a quantitative analyst at SMBC Nikko Securities in Tokyo; Guy Rosman, another postdoc in Rus’ group; and Daniel Hashimoto and Ozanan Meireles of Massachusetts General Hospital (MGH).

Representative frames

The new paper builds on previous work from Rus’ group on “coresets,” or subsets of much larger data sets that preserve their salient statistical characteristics. In the past, Rus’ group has used coresets to perform tasks such as deducing the topics of Wikipedia articles or recording the routes traversed by GPS-connected cars.

In this case, the coreset consists of a couple hundred or so short segments of video — just a few frames each. Each segment is selected because it offers a good approximation of the dozens or even hundreds of frames surrounding it. The coreset thus winnows a video file down to only about one-tenth its initial size, while still preserving most of its vital information.
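The core idea of a coreset — keep only elements that approximate the ones you drop — can be illustrated with a simple greedy sweep. This is not the authors' actual selection algorithm, just a minimal sketch of the principle, with frames standing in as plain feature vectors and an illustrative distance threshold.

```python
# Minimal illustration of the coreset idea (not the paper's algorithm):
# walk through the frames in order and keep a frame only when it differs
# enough from the last kept frame. Each kept frame then approximates the
# run of similar frames that follow it, so the kept set is much smaller
# than the original while preserving the video's salient changes.

def frame_distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def select_coreset(frames, eps=1.0):
    """Greedy keyframe selection: every dropped frame is within eps of
    the kept frame that precedes it."""
    if not frames:
        return []
    kept = [frames[0]]
    for f in frames[1:]:
        if frame_distance(f, kept[-1]) > eps:
            kept.append(f)
    return kept
```

On near-static video (long stretches of similar frames punctuated by scene changes) a sweep like this discards most frames, which matches the roughly ten-to-one reduction described above.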

For this research, MGH surgeons identified seven distinct stages in a procedure for removing part of the stomach, and the researchers tagged the beginnings of each stage in eight laparoscopic videos. Those videos were used to train a machine-learning system, which was in turn applied to the coresets of four laparoscopic videos it hadn’t previously seen. For each short video snippet in the coresets, the system was able to assign it to the correct stage of surgery with 93 percent accuracy.
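The train-then-classify step above can be sketched with a deliberately simple stand-in model. The paper's actual classifier is not specified here; this hedged example uses a nearest-centroid rule over snippet feature vectors, with made-up features and stage labels, purely to show how per-snippet accuracy would be computed.

```python
# Hypothetical sketch of the evaluation setup: learn one centroid (mean
# feature vector) per surgical stage from labeled snippets, assign each
# unseen snippet to the nearest centroid, and report the fraction of
# snippets labeled correctly. Features and labels are illustrative.

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train_centroids(features, labels):
    """Map each stage label to the mean of its training feature vectors."""
    groups = {}
    for f, y in zip(features, labels):
        groups.setdefault(y, []).append(f)
    return {y: [sum(col) / len(fs) for col in zip(*fs)]
            for y, fs in groups.items()}

def predict(centroids, f):
    """Assign a snippet's feature vector to the closest stage centroid."""
    return min(centroids, key=lambda y: sq_dist(centroids[y], f))

def accuracy(centroids, features, labels):
    hits = sum(predict(centroids, f) == y for f, y in zip(features, labels))
    return hits / len(labels)
```

The same accuracy definition — correct snippet assignments over total snippets — is what the 93 percent figure above refers to, whatever model produces the predictions.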

“We wanted to see how this system works for relatively small training sets,” Rosman explains. “If you’re in a specific hospital, and you’re interested in a specific surgery type, or even more important, a specific variant of a surgery — all the surgeries where this or that happened — you may not have a lot of examples.”

Selection criteria

The general procedure that the researchers used to extract the coresets is one they’ve previously described, but coreset selection always hinges on specific properties of the data it’s being applied to. The data included in the coreset — here, frames of video — must approximate the data being left out, and the degree of approximation is measured differently for different types of data.

Machine learning, however, can itself be thought of as a problem of approximation. In this case, the system had to learn to identify similarities between frames of video in separate laparoscopic feeds that denoted the same phases of a surgical procedure. The metric of similarity that it arrived at also served to assess the similarity of video frames that were included in the coreset to those that were omitted.

“Interventional medicine — surgery in particular — really comes down to human performance in many ways,” says Gregory Hager, a professor of computer science at Johns Hopkins University who investigates medical applications of computer and robotic technologies. “As in many other areas of human endeavor, like sports, the quality of the human performance determines the quality of the outcome that you achieve, but we don’t know a lot about, if you will, the analytics of what creates a good surgeon. Work like what Daniela is doing and our work really goes to the question of: Can we start to quantify what the process in surgery is, and then within that process, can we develop measures where we can relate human performance to the quality of care that a patient receives?”

“Right now, efficiency” — of the kind provided by coresets — “is probably not that important, because we’re dealing with small numbers of these things,” Hager adds. “But you could imagine that, if you started to record every surgery that’s performed — we’re talking tens of millions of procedures in the U.S. alone — now it starts to be interesting to think about efficiency.”

RoboCup video series: Junior league

RoboCup is an international scientific initiative with the goal to advance the state of the art of intelligent robots. Established in 1997, the original mission was to field a team of robots capable of winning against the human soccer World Cup champions by 2050.

To celebrate 20 years of RoboCup, the Federation is launching a video series featuring each of the leagues with one short video for those who just want a taster, and one long video for the full story. Robohub will be featuring one league every week leading up to RoboCup 2017 in Nagoya, Japan.

In our final set of videos, we are featuring the RoboCupJunior league! RoboCupJunior is a project-oriented educational initiative that sponsors local, regional and international robotic events for young students. It is designed to introduce RoboCup to primary and secondary school children, as well as undergraduates who do not have the resources to get involved in the senior leagues yet.

You can view all the videos on the RoboCup playlist below:

https://www.youtube.com/playlist?list=PLEfaZULTeP_-bqFvCLBWnOvFAgkHTWbWC

Please spread the word! If you would like to join a team, click here for more information.

See all the latest robotics news on Robohub, or sign up for our weekly newsletter.

Robots offer the elderly a helping hand

Humanoid robots under development can be programmed to detect changes in an elderly person’s preferences and habits. Image credit: GrowMeUp

by Helen Massy-Beresford

Low birth rates and higher life expectancies mean that people aged over 65 will account for 28.7% of Europe’s population by 2080, according to Eurostat, the EU’s statistics arm.

That means the age-dependency ratio – the proportion of elderly people relative to the number of workers – will almost double, from 28.8% in 2015 to 51% in 2080, straining healthcare systems and national budgets.

Yet there’s hope marching over the horizon, in the form of robots.

The creators of one humanoid robot under development for the elderly say it can understand people’s actions and learn new behaviours in response, even though it is devoid of arms.

Robots can be programmed to understand an elderly person’s preferences and habits and to detect changes in behaviour: for example, if a yoga devotee misses a class, the robot will ask why, and if an elderly person falls, it will automatically alert caregivers or emergency services.

Yet there’s still a way to go before these devices will be able to bring out a tray of tea and biscuits when visitors drop by, according to their creators.

At the moment there are things the robot can perform perfectly in the lab but that still present challenges, says Dr Luís Santos from the University of Coimbra in Portugal, who has been developing the technology as part of an EU-funded research project known as GrowMeUp.

The proportion of elderly people is expected to almost double by 2080, so researchers are looking to robots to see if they can help care for the aging population. Image credit: GrowMeUp

‘There is a mismatch between what elderly people want and what science and technology can provide – some of them are expecting robots to do all types of household activities, engage them in everyday gossip or physically interact with them as another human would do,’ says Dr Santos.

The team is working on making the robot’s dialogue as natural and intuitive as possible, and on improving its ability to safely navigate an older person’s home using a low-cost laser and a camera. A second prototype will be tested with elderly people in the coming months. Even so, Dr Santos estimates these devices are still at least four to six years away from commercialisation.

Revolution

He sees robotics as just a part of a wider revolution underway in how societies care for the elderly, with connectivity and augmented reality also playing a role.

‘In the future, elderly care will also be very focused on information and communications technologies – for example virtual access to doctors or care institutions and 24/7 monitoring in a non-invasive way are likely to become standard,’ he said.

Yet researchers believe that keeping the technology unobtrusive is key – no wearable devices or cumbersome cameras cluttering up people’s homes.

Dr Maria Dagioglou from the National Centre of Scientific Research ‘Demokritos’ in Greece, said: ‘We wanted to avoid a Big Brother scenario, so data privacy is important but also dignity.’

She is looking at ways to integrate robotics technology into a smart home equipped with connected devices, automation and sensors, as part of the EU-funded RADIO project.

Researchers are figuring out ways of putting robots in homes for virtual access to healthcare and constant monitoring, yet that are also non-invasive. Image credit: RADIO

Dr Stasinos Konstantopoulos, the scientific manager of the RADIO project, added: ‘All monitoring happens as the user interacts with the system to control the house, for example, to regulate the temperature, and to ask the robot to run errands, like finding misplaced items.’

Residents interact with the equipment, which should take only a day to install, using a tablet or smartphone. It can monitor elements of an elderly person’s day-to-day life, efficiently processing and managing data so that medical professionals can track and assess their level of independence via smartphone notifications.

‘It’s a constant safety net in case something starts to be worrying,’ said Dr Dagioglou.

The goal of innovations like this is to allow people to live independently for longer.

A crucial element of this is finding ways for older people to keep up their activity levels, and this is an area where robots could really come into their own.

Dr Luigi Palopoli at the University of Trento in Italy said: ‘Our robot pushes them to do their exercise, to go out and about; it extracts information on their interests and on their fears and makes them part of a network.’

Barriers

‘We want to tear down the emotional barriers that make them stay at home and degrade the quality of their life,’ he said.

As part of the EU-funded Acanto project, he is developing a robot called FriWalk, following on from the progress made during a previous EU-funded project, the DALi project.

The team has worked hard to make the FriWalk robot look energetic and appealing and to ensure it offers useful services like carrying small items or giving directions.

With prototypes built, the researchers will start clinical trials in Spain in the next few months, as well as public demonstrations of the FriWalk in museums and other public spaces.

Researchers will start clinical trials in Spain of a robotic prototype designed to help elderly people to exercise. Image credit: Acanto

Further ahead, Dr Palopoli hopes for interest from established manufacturers and start-ups to bring the FriWalk technology to the market.

Notes and pics from Xponential in Dallas, Innorobo in Paris and ICRA in Singapore

Conferences and trade shows, held in interesting locations around the world, can be entertaining, informative and an opportunity to explore new places, meet new people and renew acquaintances. Three recent examples: Xponential, the mostly defense-related unmanned land, sea and air show, held in Dallas; Innorobo, focused on service robotics, in Paris; and ICRA, the IEEE’s premier robotics conference, in Singapore.

ICRA

The 2017 IEEE International Conference on Robotics and Automation (ICRA), the IEEE’s principal forum for robotics researchers to present their work, was held this year at the Marina Bay Sands Hotel and Convention Center in Singapore. ICRA continues to have the highest number of cited research papers in the robotics field of all the various global conferences (including IROS).

In an IEEE/Spectrum review of the biomedical portion of the conference, a swallowable, magnetically guided capsule robot capable of needle aspiration and an autonomous snake-like colonoscopy robot were two of the hits. Another reviewer found the rehab exoskeletons, haptic interfaces, modular robot components and many of the ROS-enabled solutions of merit. Overall, almost 3,000 robotics researchers attended ICRA 2017 and most found many things of interest (including Singapore and the Marina Bay Sands Hotel itself).

Xponential – all things unmanned

Xponential, the annual trade show and conference of the Association for Unmanned Vehicle Systems International (AUVSI), was held this year in Dallas, Texas; it showed the changing nature of the industry and offered suggestions (guesses) as to where it is heading. 170,000 visitors attended, while 100 speakers and over 650 exhibitors put on this choreographed show of military weaponry, defense and security systems and equipment, and commercial unmanned air, land and sea systems.

AUVSI’s membership fees are discounted for members of the military and first responders and the exhibitor list continues to favor military/defense-related companies, but most of those companies now have a growing commercial component.

Autonomous vehicles have always been a core constituency of AUVSI, and with all the money flowing into autonomous car startups, and the talent search to corral people to make this new industry happen, a small portion of this Xponential show was devoted to the prospect of that future (see chart above).

The folks at The National Robotics Education Foundation (NREF) produced a gallery of over 300 interesting photos from Xponential. They also produced a special set of pictures of UAS engines from the show. Unmanned vehicles used by the military, for search and rescue, in support of agriculture and mining, for infrastructure inspection, and in a variety of other circumstances must stay aloft for long periods, hence the interest in engines that can support that amount of air time.

Innorobo

I visited Innorobo. It is a necessary show in a rapidly changing arena. Over 7,000 visitors perused an eclectic group of 170 startups, integrators, component manufacturers and service robot providers exhibiting a wide range of products and services at a site on the outskirts of Paris. Over the 3-day show, 50 speakers explored topics from robotics-related AI to philosophical discussions about law and ethics to the latest innovations in personal and professional service robotics.

The IFR (International Federation of Robotics) says that robot installations in France increased by 22% to 1,400 units in 2016 (compared to 700 units in the UK), particularly within the car industry. France ranks 2nd within the EU for robot density (the UK is 10th). Innorobo started as a show to promote France’s robotics industry (there are 225+ French companies in The Robot Report’s directories and on our global map). Originally held in Lyon, the show grew to its present size through the hard work and willpower of a small group of inventive women entrepreneurs, and relocated to Paris, where it has been held for the last two years. As its focus expanded from promoting in-country robotics to showcasing global innovations from the startups, research labs and service robot companies making inroads around the world, the show has become a valuable mainstay for the European press, investors, business executives, students, and roboticists alike.

Events, Directories and The Robot Report’s Global Map

From time to time it becomes relevant to toot the horn of the free resources available on The Robot Report. Our events calendar, directory of companies and educational institutions involved in the robotics industry, and the global map for job seekers and researchers alike are free and always updated.

There are still 28 robotics-related events remaining in 2017. Check them out on our events calendar.

It’s not just self-driving cars; trains could soon be autonomous too!

Judging by the frequency that self-driving cars are mentioned in scientific discussions and the media, they are not only the next big thing, but might actually take over as our main means of transportation. Traditional industries like the railways, on the other hand, seem to have lost that race already. But what if new technologies, such as Internet of Things (IoT) devices and Artificial Intelligence (AI), were not only used to create new transportation modes, but to transform old ones as well?

If we get this digitization right, then trains, as the winners of the first industrial revolution, could in fact be here to stay.

Long-distance

It is true that the technology behind autonomous cars has enormous potential and that they might emerge as the winners when it comes to shorter distances. But I believe that trains have a very real chance at becoming the transportation of choice for long-distance travel.

How exactly could digitization make this happen?

With the help of new digital processes, rail companies could increase the capacity of their networks and resolve traffic bottlenecks. This will, in turn, help more people reach their destinations sooner. The use of emerging technologies could also mean that the trains of the future will not only be more comfortable, but also more energy-efficient, safer and faster than cars over long distances.

Some pieces of this puzzle are already in place.

What is left for the rail industry is to identify the change still necessary to become ready for the future, and to accept IoT technologies and AI as its chief enablers.

What the rail industry already has going for it

1. Rail is energy-efficient. Government institutions examine the energy efficiency of different transportation means on a frequent basis. A recent US study, for instance, shows that high-speed trains are up to eight times more efficient than commercial planes, and four times more energy-efficient than cars over the same distance. While the overall trend remains the same globally, the numbers vary for different regions.

High-speed trains in Europe need only one-third of the energy used by automotive travel. The Japanese high-speed rail industry is even more advanced – it uses only one-sixth of the energy.

2. Rail traffic is clean. Cargo transport on the road produces eight times more CO2 emissions than freight trains. These numbers become even clearer when combining freight and passenger transport: railway companies account for only 1.3% of the total CO2 emissions in the transport sector, whereas aviation makes up 12.4%, shipping 12.7% and road transport 72.2%.

Image source: European Environment Agency, 2015

3. Rail is safer than other means of passenger transportation. The US Department of Transportation reports that, in 2010, the number of people injured on the highway was 304 times higher than the number of casualties in railroad accidents.

In Europe, where the predominance of car travel isn’t as pronounced as it is in North America, the numbers still show a clear trend: fifteen times as many people were fatally wounded in car accidents in 2013 as in railway-related accidents.

4. Rail is already on the rise. The total length of high-speed railway lines in Central Europe has increased 16-fold since 1981 and the expansion of the European rail network is still ongoing.

In general, worldwide passenger transport by train has doubled since 1985. People seem to like taking the train and they won’t stop anytime soon.

Image source: UIC, 2015

What are the challenges that lie ahead and how can we tackle them?

Low network capacities and traffic bottlenecks on busy routes are among the main factors that are holding back progress in the rail industry. If we can’t figure out how to bring even more passengers and trains on railway tracks, and how to make sure that these trains arrive on time, then rail won’t be part of the “future of mobility”. The rail industry needs to adopt new technologies and operational processes in order to keep up.

IoT technologies and AI have the potential to enable this change – and in some areas it has already begun. Smart infrastructure components and autonomous trains will soon be interconnected and able to communicate with each other.

This machine-to-machine communication supports the efficiency of train services. It also means that smart sensors can transmit field data to the right platforms as efficiently as possible, that machine data can be used for more than just operation protocols, and that data from very diverse sources can easily be aggregated.

If train network operators combine these smart devices with machine learning algorithms, they can optimize their routes in real-time and distribute traffic more evenly.
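
As a toy illustration of that idea (not any operator’s actual system — the train names, route names, and function are all made up), distributing traffic more evenly can be as simple as greedily sending each train down the least-loaded route it can feasibly take; real dispatching systems would feed learned demand and delay predictions into a far richer optimizer.

```python
def distribute_trains(trains, routes):
    """Greedy load balancing across a rail network.

    trains: dict mapping train id -> list of feasible routes for that train
    routes: list of all route names
    Returns (plan, load): the chosen route per train and trains per route.
    """
    load = {r: 0 for r in routes}
    plan = {}
    for train, feasible in trains.items():
        # Send each train down its currently least-loaded feasible route.
        best = min(feasible, key=lambda r: load[r])
        plan[train] = best
        load[best] += 1
    return plan, load
```

With four trains that can each take either of two illustrative routes, the greedy pass splits them two and two, which is exactly the even distribution the paragraph describes.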

Bottlenecks and maintenance

A very common cause of temporary traffic bottlenecks is unplanned maintenance. Railway lines are closed completely, or speed restrictions are put in place, until the damage to the infrastructure can be fixed. Even though this problem has been around for as long as rail travel has existed, that does not mean we have to accept it as inevitable.

Rail companies have already started to install smart sensors in their trains and infrastructure, so that they can react faster when problems arise.

Technologies that combine this so-called “condition monitoring” with AI go one step further.

These solutions not only monitor the current health of rail infrastructure, but can also predict wear and potential failures in advance, and so enable rail companies to plan their maintenance in time and prevent train delays.
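
A minimal sketch of that prediction step, assuming a single wear metric sampled once per day: fit a straight line to the readings and extrapolate to when the wear crosses a failure threshold, so maintenance can be scheduled before the line has to be closed. Real condition-monitoring systems use far richer models; the function name, threshold, and daily cadence here are assumptions for illustration.

```python
def days_until_failure(readings, threshold):
    """Least-squares line through daily wear readings, extrapolated
    forward to the day the wear metric crosses the failure threshold.
    Returns days remaining from the last reading, or None if there is
    no upward wear trend to extrapolate."""
    n = len(readings)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(readings) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings)) \
            / sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None  # wear is flat or improving; nothing to predict
    intercept = mean_y - slope * mean_x
    crossing = (threshold - intercept) / slope  # day index at threshold
    return max(0.0, crossing - (n - 1))        # days after the last reading
```

For a sensor whose wear reading climbs by one unit per day from 1.0 to 4.0 against a failure threshold of 10.0, the extrapolation gives six days of remaining life — enough warning to plan a maintenance window instead of an emergency closure.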

Real-time route optimization and fewer train delays caused by unplanned maintenance would not only reduce operational costs for rail companies, but also make rail travel more appealing to passengers.

Add this to helpful IoT applications for the modern traveller, such as interactive maps of train stations, mobile tickets or journey planning apps, and railway will become part of the future of transportation – especially for long-distance travel.

Thought leadership in social sector robotics

WeRobotics Global has become a premier forum for social good robotics; the feedback featured below was unsolicited. On June 1, 2017, we convened our first annual global event, bringing together 34 organizations in New York City (full list below) to shape the global agenda and future use of robotics in the social good sector. WeRobotics Global was kindly hosted by the Rockefeller Foundation, the first donor to support our efforts. They opened the event with welcome remarks and turned it over to Patrick Meier from WeRobotics, who provided an overview of WeRobotics and the big-picture context for social sector robotics.

I’ve been to countless remote sensing conferences over the past 30 years, but WeRobotics Global absolutely ranks as the best event I’ve been to. – Remote Sensing Expert

The event was really mind-blowing. I’ve participated in many workshops over the past 20 years. WeR Global was by far the most insightful and practical. It is also amazing how closely together everyone is working — irrespective of who is working where (NGO, UN, private sector, donor). I’ve never seen such a group of people come together this way. – Humanitarian Professional

WeRobotics Global is completely different to any development meeting or workshop I’ve been to in recent years. The discussions flowed seamlessly between real world challenges, genuine bottom-up approaches and appropriate technology solutions. Conversations were always practical and strikingly transparent. This was a highly unusual event. – International Donor

The first panel featured our Flying Labs Coordinators from Tanzania (Yussuf), Peru (Juan) and Nepal (Uttam). Each shared the hard work they’ve been doing over the past 6-10 months on localizing and applying robotics solutions. Yussuf spoke about the lab’s use of aerial robotics for disaster damage assessment following the earthquake in Bukoba and for coastal monitoring, environmental monitoring and forestry management. He emphasized the importance of community engagement and closed with new projects that Tanzania Flying Labs is working on such as mangrove monitoring for the Department of Forestry. Juan presented the work of the labs in the Amazon Rainforest, which is a joint effort with the Peruvian Ministry of Health. Together, they are field-testing the use of affordable and locally repairable flying robots for the delivery of antivenom and other medical payload between local clinics and remote villages. Juan noted that Peru Flying Labs is gearing up to carry out a record number of flight tests this summer using a larger and more diverse fleet of flying robots. Last but not least, Uttam showed how Nepal Flying Labs has been using flying robots for agriculture monitoring, damage assessment and mapping of property rights. He also gave an overview of the social entrepreneurship training and business plan competition recently organized by Nepal Flying Labs. This business incubation training has resulted in the launch of 4 new Nepali start-up companies focused on Robotics-as-a-Service. 

The following images provide highlights from each of our Flying Labs: Tanzania, Peru and Nepal.

The second panel featured talks on sector-based solutions, starting with the International Federation of the Red Cross (IFRC). The Federation (Aarathi) spoke about its joint project with WeRobotics, looking at cross-sectoral needs for various robotics solutions in the South Pacific. IFRC is exploring the possibility of launching a South Pacific Flying Labs with a strong focus on women and girls. Pix4D (Lorenzo) addressed the role of aerial robotics in agriculture, giving concrete examples of successful applications while providing guidance to our Flying Labs Coordinators. The Wall Street Journal (Sally) spoke about the use of aerial robotics in news gathering and investigative journalism. She specifically emphasized the importance of using flying robots for storytelling. Duke Marine Labs (David) closed the panel with an overview of their projects in nature conservation and marine life protection, highlighting their use of machine learning for automated feature detection and real-time analysis.

Panel number three addressed the transformation of transportation. UNICEF (Judith) highlighted the field tests they have been carrying out in Malawi, using cargo robotics to transport HIV samples in order to accelerate HIV testing and thus treatment. UNICEF has also launched an air corridor in Malawi to enable further field-testing of flying robots. MSF (Oriol) shared their approach to cargo delivery using aerial robotics. They shared examples from Papua New Guinea (PNG) and emphasized the importance of localizing appropriate robotics solutions that can be maintained locally. MSF also called for the launch of PNG Flying Labs. IAEA was unable to attend WeR Global, so Patrick and Adam from WeRobotics gave the talk instead. WeRobotics is teaming up with IAEA to design and test a release mechanism for sterilized mosquitos in order to reduce the incidence of Zika and other mosquito-borne illnesses. More here. Finally, Llamasoft (Sid) closed the panel with a strong emphasis on the need to collect and share structured data in order to accurately carry out comparative cost-benefit analyses of cargo delivery via flying robots versus conventional means. Sid used the analogy of self-driving cars to highlight how problematic the current lack of data is for reliably evaluating the impact of cargo robotics.

The fourth and final panel went beyond aerial robotics. Digger (Thomas) showed how they convert heavy construction vehicles into semi-autonomous platforms to clear landmines and debris in conflict zones like Iraq and Syria. Science in the Wild (Ulyana) was alas unable to attend the event, so Patrick from WeRobotics gave the talk instead. This focused on the use of swimming robots to monitor glacial lakes in the Himalaya. The purpose of the effort is to identify cracks in the lake floors before they trigger what local villagers call the tsunamis of the Himalaya. OpenROV (David) gave a talk on the use of diving robots, sharing real-world examples and providing exciting updates on the new Trident diving robot. Planet Labs (Andrew) gave the closing talk, highlighting how space robotics (satellites) are being used across a wide range of social good projects. He emphasized the importance of integrating both aerial and satellite imagery to support social good projects.

The final session at WeR Global comprised breakout groups to identify next steps for WeRobotics and the social good sector more broadly. Many quality insights and recommendations were shared during the report back. One such recommendation was to hold WeR Global again, and sooner rather than later. So we look forward to organizing WeRobotics Global 2018. We will be providing updates via our blog and email list. We will also use our blog and email list to share select videos of the individual talks from Global 2017 along with their respective slide decks.

In the meantime, a big thanks to all participants and speakers for making Global 2017 such an unforgettable event. And sincerest thanks to the Rockefeller Foundation for hosting us at their headquarters in New York City.

The Drone Center’s Weekly Roundup: 6/19/17

The Missile Defense Agency is seeking a high-altitude unmanned aircraft that can be equipped with a high-energy laser. Credit: MDA

June 12, 2017 – June 18, 2017

News

A U.S. drone strike in Yemen reportedly killed two individuals suspected of being members of al-Qaeda. The strike targeted a vehicle in Shabwa province, one of several strongholds of al-Qaeda in the Arabian Peninsula. (Reuters)

The European Union released draft regulations for consumer and commercial drones. The blueprint was assembled by the Single European Sky Air Traffic Management Research Joint Undertaking, a body set up by the European Commission to study low-altitude drone operations. The EU plans to implement drone regulations by 2019. (Press Release)

Commentary, Analysis, and Art

A report by the Center for a New American Security explores possible policies designed to manage military drone proliferation.

A report by the Columbia Law School’s Human Rights Clinic and the Sana’a Center for Strategic Studies evaluates the U.S. government’s transparency on drone strikes between 2002 and 2017.

At Fast Company, Steven Melendez considers how the next generation of military drones and autonomous systems could change warfare.

At the National Interest, David Axe writes that the U.S. Air Force is planning to invest more in disposable strike drones than large complex systems.

Also at the National Interest, Dan Goure looks at how U.S. companies are focusing on ending the threat posed by rogue drone use.

At the Washington Post, Thomas Gibbons-Neff writes that U.S. officials are concerned about the ability of ISIS drones to disrupt U.S. operations.

At Lawfare, Rebecca Crootof and Frauke Renz argue that the conversation surrounding lethal autonomous weapons should seek alternative regulatory strategies beyond an outright ban.

At TechCrunch, Brian Heater looks at how RE2 Robotics is making robot control mechanisms more intuitive.

At Inc.com, Will Yakowicz writes that Saildrone, a California-based startup, aims to deploy more unmanned sailboats to measure climate change than all of the satellites in space.

A study by the Karolinska Institute in Stockholm found that defibrillator-carrying drones could cut response times for cardiac arrests by 16 minutes. (The Guardian)

An essay at the Economist explores how consumer drones are being put to work for commercial applications.  

At iRevolutions.org, Patrick Meier looks at how flying robots could be used to combat the spread of Zika.

At the University of Toronto, students and faculty discuss the opportunities presented by the development of advanced autonomy for consumer drones.

At KSHB, Belinda Post writes that drones are a popular father’s day gift this year.

At AIN Online, Vladimir Karnozov considers the history of Iranian-Russian collaboration on the development of new drones.

Know Your Drone

The U.S. Missile Defense Agency is looking to acquire a high-altitude long-endurance drone that can carry a high-energy laser to intercept intercontinental ballistic missiles. (IHS Jane’s 360)

Drone maker RaptorUAS has unveiled the Raptor EV, a vertical takeoff and landing fixed-wing drone. (Unmanned Systems Technology)

U.S. drone maker Kratos is unveiling its two new low-cost combat drones, the XQ-222 Valkyrie and the UTAP-22 Mako. (New Atlas)

The U.S. Air Force is conducting a study to estimate the service life of its MQ-9 Reaper drones. (Defense Daily)

Airbus Defence and Space will conduct test flights of its Zephyr high-altitude long-endurance drone in Australia next year. (Shephard Media)

U.K. firm Horizon Technologies is looking to mount a satellite phone monitoring sensor on small drones. (IHS Jane’s 360)

Researchers at Nvidia are developing drone navigation systems that rely on computer vision rather than GPS signals. (The Drive)

Amazon has been awarded two patents for its delivery drone system: one for foldable rotor arms and the other for a winch system to lower packages from the drone to the ground. (GeekWire)

Drones at Work

South Korean officials said that a North Korean drone that crashed in the country had taken 10 photographs of a sensitive U.S. missile defense site. (The Washington Post)

A search and rescue team in Colorado used a newly acquired drone to search for a group of missing hikers. (CBS4)

General Electric has begun a program to test drones and unmanned ground vehicles to inspect industrial facilities and infrastructure. (Reuters)

The Croatian Defense Ministry is planning to acquire drones and set up an unmanned aircraft squadron. (Defense News)

Drone Delivery Canada conducted the first beyond visual line of sight delivery test flights in Canada, flying a drone in Alberta from a control center 2,500 kilometers away in Toronto. (AUVSI)

Documents released by the U.S. Department of Justice show that there have been more than a dozen attempts to smuggle contraband into federal prisons using drones in the past five years. (USA Today)

Firefighters in London used a drone to assist in the response to the Grenfell Tower fire. (Newsweek)

The Federal Aviation Administration is investigating reports of illegal drone operations in Charlotte, North Carolina. (Charlotte Observer)

The California National Guard has relocated its fleet of MQ-9 Reaper drones from Victorville to the March Air Reserve Base. (Aviation Week)  

Security officials used a Dedrone counter-drone system at the Golden State Warriors’ basketball team’s championship parade in Oakland, California. (Recode)

Massachusetts police used drones for security during the Sail Boston boating event. (Boston Globe)

The Middlesex County prosecutor’s office in New Jersey bought a drone for criminal investigations. (TAP into Piscataway)

The Israeli air force is developing a 15-year roadmap for its fleet of unmanned aircraft. (FlightGlobal)

Police are searching for a drone operator who flew a drone close to air tankers and helicopters assisting in the response to a California brush fire. (10News)

The U.S. Navy picked the USS Dwight D. Eisenhower and USS George H.W. Bush as the first two aircraft carriers to deploy the MQ-25 Stingray unmanned refueler drone. (USNI News)

Industry Intel

The U.S. Navy awarded Boeing Insitu an $8 million contract for one RQ-21A Blackjack unmanned aircraft system for the Marine Corps. (DoD)

The Defense Advanced Research Projects Agency awarded Raytheon a $5.2 million contract for the Aerial Dragnet program.

The Defense Advanced Research Projects Agency awarded Embry-Riddle and Creare a $1 million grant to develop an autonomous flight control system for drones. (Press Release)

The Drone Racing League raised $20 million in grants from Allianz and Sky. (The Telegraph)

Kraken is partnering with Atlas Elektronik to develop a system for the Royal Canadian Navy’s Remote Mine Disposal System requirement. (IHS Jane’s 360)

Laura Ponto, an executive at Alphabet’s Project Wing, is the new chairman of the Commercial Drone Alliance, an industry advocacy organization. (Recode)

Israel’s Aeronautics, a drone manufacturer, has made an initial public offering on the Tel Aviv Stock Exchange. (IHS Jane’s 360)

For updates, news, and commentary, follow us on Twitter. The Weekly Drone Roundup is a newsletter from the Center for the Study of the Drone. It covers news, commentary, analysis and technology from the drone world. You can subscribe to the Roundup here.


New laissez-faire robocar rules may arise

While very few details have come out, Reuters reports that new proposed congressional bills on self-driving cars will reverse many of the provisions I critiqued in the NHTSA regulations last year.

One big change is a reversal of the new idea of pre-market regulation. Today, new car technologies are not regulated before they are deployed, but NHTSA proposed giving itself the power to regulate technologies even before they exist. Currently, most car technologies like adaptive cruise control, autopilots, forward collision avoidance, lane keeping and the like remain unregulated after a decade or more of deployment with few, if any, problems.

This is important because the old doctrine of “we don’t regulate until we see a problem the industry won’t fix on its own” is a much better one for innovation, and the speed of innovation is key in deciding which countries and companies lead this technology. The opposite approach of “we try to imagine what might go wrong and ban it ahead of time” may seem safer, but it’s definitely an impediment to innovation and may actually result in far more deaths through the delay of life-saving technologies.

Harder to judge is the preemption of state rules. While states are also attempting to pre-regulate, having a laboratory of 50 different competing states can also be good for innovation on the legal side. There is not one answer, and while it’s more complex to deal with 50 sets of regulations instead of one, it’s not that much more complex.

One of the few interesting and good ideas in the NHTSA regs may also vanish. NHTSA wanted all vendors to make available all sensor logs from all incidents. As I predicted, companies pushed back on this — their testing logs and the resulting test suites are very important competitive assets. The company with the best test suite is the furthest on the path to the safety needed for deployment. On the other hand, sharing this data would let everybody get further on that path, faster.

There has been plenty of other news during the long road trip I am on in Europe, including more entrants in the race, the retirement of Google’s 3rd-generation “koala” car, and more developments at Uber. I will report from the Autonomous Car Testing and Development conference in Stuttgart starting Tuesday.

Trusting robots with our lives

The Baxter robot hands off a cable to a human collaborator — an example of a co-robot in action. Photo credit: Aaron Bestick, UC Berkeley.

The key takeaway from Tuesday’s RobotLabNYC forum, “Exploring The Autonomous Future,” was that humans are the key to robot adoption. Dr. Howard Morgan of First Round Capital told the audience of more than 100 innovators working within the automation ecosystem that reaching customers requires embracing “entrepreneurial marketing.” Tom Ryden echoed Morgan’s sentiment in his presentation on Mass Robotics, conveying its startups’ frustrations with the pace of adoption. Dr. Eric Daimler, formerly of the Obama Administration, concluded the evening succinctly: “we only adopt what we trust.” Trust is key for crossing the chasm.

Intuitive Surgical celebrated its 17th year of operations this past year, with close to a million robotic surgeries completed last year. According to a recent Gallup poll, medical professionals are the most trusted individuals in our society, more trusted even than clergy. The fact that robot-assisted surgery has become so routine and so widely accepted by doctors and their patients is proof positive that in some industries we have already crossed the trust threshold.

Photo credit: Robert Shields

Understanding how Intuitive’s Da Vinci robot built trust within the medical community could offer parallels to other areas of the automation industry. Robot-assisted surgery, or “telerobotics,” is the evolution of two modern technologies: 1) telepresence, or telemanipulation; and 2) laparoscopic surgery. In 1987, French physician Dr. Philippe Mouret performed the first minimally invasive gallbladder surgery, using an endoscope-like device to remotely guide his instruments via video and remove the damaged organ. By the 1990s, laparoscopic surgery had become commonplace, driving demand for greater precision through mechanics and computer-aided techniques. A decade later, Intuitive received FDA approval for its Da Vinci robot for general surgery, which has since been expanded to prostate, neurological, and thoracic procedures. Telerobotics evolved not just from the availability of advanced technologies, but from the demand for less invasive procedures by the most trusted people in America.

Last month, the FDA approved the Da Vinci X System, enabling Intuitive Surgical to market less expensive systems and gain market share with smaller medical institutions globally. “This new system enables access to Intuitive’s leading and proven robotic-assisted surgical technology at a lower price point. Customers around the globe have different needs from a clinical, cost and technology perspective; Intuitive’s goal is to meet those needs by providing a range of products and solutions: the da Vinci X System helps us continue to do so,” said CEO Dr. Gary Guthart.

According to the press release, the Da Vinci X System performs “focused-quadrant surgery and features flexible port placement and 3D digital optics, while also including advanced instruments and accessories from its Xi system.” Another determining factor in Intuitive Surgical’s success is the interoperability of the instruments. Rather than just an endoscope that provides video feeds, Da Vinci is equipped with multiple end effectors that mimic traditional instruments, guided telerobotically by experienced surgeons.

Patients trust the robot because it simply augments their doctor’s skills with greater precision, a trust reinforced by shorter recovery periods and better outcomes. Recently, Oxford researchers published a nine-year study concluding that patients who opted for robotic lobectomies had better lung cancer outcomes. As a new generation of surgeons embraces the robotic future, the market for abdominal surgical robots is expected to grow from $2.9 billion in 2017 to $12.9 billion by 2022.

Trusting robots with our bodies might seem like a difficult premise to uphold, but robots have been saving lives on the front lines since 1972. Bomb-defusing machines have been utilized in the most dangerous situations worldwide, from Afghanistan to Jerusalem to New York City. Today, almost every police department and military has an arsenal of remote-controlled explosive removal devices.

Dr. Sethu Vijayakumar, director of the Edinburgh Centre for Robotics in the United Kingdom, explains, “One of the target areas, in terms of [the] use of robots, is for going into dangerous situations. Robots can go in, be operated from a safe distance, and, in a worst-case scenario, be sacrificed.”

Similar to robotic medicine, trust-based systems for the military are built by projecting human expertise into dangerous situations. Pittsburgh-based RE2 Robotics took this concept to a new level with the Robotic Manipulation System it announced last week. The RE2 system enables users to control the robot’s movements and grippers with their own limbs and hands to quickly defuse explosives.

RE2 CEO Jorgen Pedersen explained the rationale for the new product: “Often times, you still need the human intellect to perform those tasks. But they’re dangerous, so the question is, how can we project that human capability remotely, so they’re still able to do their job and leverage the human intellect to solve a really big problem? That’s what we’re trying to do — keep the human safe, but allowing them to still do their job.”

While the rover looks remarkably similar to Endeavor’s (formerly iRobot’s) PackBot, which has been widely deployed by the US military in Iraq, Afghanistan and elsewhere, the control system is novel and more reliable in high-pressure situations. Pedersen says, “If you’re going to project that human capability, the most human way to control it is to have it be as much like you as possible. That’s where we’ve come over the past decade, having true human-like capability. It’s no coincidence that these robots look like human torsos. These systems are a projection of you, remotely. It’s almost like an avatar, where you’re dealing with a threat out of harm’s way.”

While today the operator of RE2’s robot stands at a safe distance watching a video feed on a laptop, the company is developing a virtual reality headset accessory for the control system that will let the professional immerse themselves fully in the situation. Pedersen also plans to expand the use cases for the technology into civilian markets such as search and rescue, disaster recovery (like Fukushima), and medicine.

“Yes, people could use this technology for other means. But our charter is saving lives and extending it into new markets like health care, where we can do patient assist. [We can] help a person from a wheelchair to a bed or a wheelchair to a toilet, as the brawn for a caregiver,” touts Pedersen.

While we are years away from fully trusting autonomous systems with our lives, it appears from these two examples that the first step is enabling machines to augment our most trusted citizens. As yesterday was Father’s Day, it is only appropriate that I share my gift with my readers – GrillBot. The disclaimer on the box, however, does make me question when I plan to use it; fear of death is kind of a big deal!
