News


Robohub Digest 05/17: RoboCup turns 20, ICRA in Singapore, robot inspector helps check bridges

A quick, hassle-free way to stay on top of robotics news, our robotics digest is released on the first Monday of every month. Sign up to get it in your inbox.

20 years of RoboCup

20 years in the books! RoboCup, first held in 1997, was originally established to bring forth a team of robots that could beat the human soccer World Cup champions. Twenty years on, RoboCup is so much more than just a soccer competition. In fact, it has grown into an international movement with a variety of leagues. Teams compete against each other in four different leagues and many sub-competitions, including home, work, and rescue missions. The complexity of missions in RoboCup requires intelligent, dynamic, sensing robots that can react to chaotic and changing environments. And in its 20-year history, the competition has brought forth numerous winners who have gone on to achieve great things.

Without robot competitions like RoboCup, the field of robotics wouldn’t be where it is today. So to celebrate 20 years of RoboCup, the Federation launched a video series featuring each of the leagues with one short video for those who just want a taster, and one long video for the full story. Robohub will be featuring one league every week leading up to RoboCup 2017 in Nagoya, Japan. You can watch all videos from the playlist here.

ICRA 2017 in Singapore

While we have RoboCup 2017 to look forward to, the IEEE 2017 International Conference on Robotics and Automation (ICRA) took place in Singapore. Under the conference theme “Innovation, Entrepreneurship, and Real-world Solutions”, the event brought together engineers, researchers, entrepreneurs and industry to address some of the major challenges of our times.

This year’s keynote speakers included Louis Phee, who described development and implementation of a surgical robot called EndoMaster, and Peter Luh, giving an overview of Industry 4.0. Alongside the speeches, the event included 25 workshops and tutorials, various exhibitions, as well as four Robot Challenges.

Self-driving cars: Competition heats up

May wasn’t just events and competitions. In the world of autonomous vehicles, France’s Groupe PSA, the second largest car manufacturer in Europe and owner of brands such as Peugeot and Citroën, has teamed up with autonomous car maker nuTonomy. The collaboration will seek to build a self-driving Peugeot 3008, which they hope will hit the roads of Singapore in the not-too-distant future.

Groupe PSA and other well-known car manufacturers, including Ford Motors, who are just starting their bid to enter the autonomous car market, are lagging way behind Waymo (the self-driving car company owned by Google’s parent, Alphabet) when it comes to putting their cars on actual roads. And with Waymo about to team up with ride-hailing start-up Lyft, Alphabet is getting ever closer to making its autonomous cars part of mainstream traffic.

Another player in the self-driving car game we haven’t mentioned yet is cab service Uber. The company is locked in a dispute with Google over allegedly stolen design secrets and will likely be going to public trial later in the year. Linked to the lawsuit, Uber fired the engineer at the heart of the dispute, Anthony Levandowski, who is suspected of having stolen company secrets when he left Google and founded his own company, Otto, which was acquired by Uber last year. The information war continues.

Self-driving cars: Innovation

Meanwhile, a group of veterans previously linked with Alphabet founded their own company – DeepMap Inc. – which aims to develop systems that allow cars to navigate complex cityscapes.

It’s not just cityscapes that are complex to navigate. Other vehicles, cyclists, and pedestrians on busy roads are part of the difficult environment a self-driving car has to cope with, precisely because their behaviour is unpredictable. It will, therefore, become necessary for autonomous cars to understand and predict behaviour. Here, machine learning will be key, as Dr Nathan Griffiths explains in The Royal Society’s blog on machine learning in research.

And with so much interest and research into autonomous, intelligent cars, it’s not surprising that some, like Chris Urmson in a recent lecture at CMU, predict we will see a shift from the traditional transport model where people own their cars to a more dynamic, responsive model of “Transportation as a Service”, in which companies own fleets of cars that can be used by anyone when and where they’re needed.

Robots in the fields

Many of the innovations that have enabled autonomous cars are transferable to industrial, commercial and agricultural vehicles. In the case of the latter, self-driving tractors and precision agribots have already increased productivity and made 24-hour autonomous, high-yield farming a possibility.

In Salinas Valley, California, robots are already used to pick lettuce, and to help vineyard owners decide when they need to water their plants. And GV (formerly Google Ventures) just invested $10 million into Abundant Robotics to build a picking robot, initially to pick apples but with potential for adaptation to support the harvest of other fruit.

It is believed that the uptake of farming technologies is not progressing as quickly as it could, because farmers have been slow to accept precision agriculture products, software, equipment and practices. But the farming (r)evolution is well underway: agricultural robotics is already a $3 billion industry, set to grow to an impressive $12 billion by 2026.

Robots in the skies

While robots are still waiting to be fully accepted in agriculture, it’s no secret that the US military uses drones extensively. What came as a surprise was the strange video feed that surfaced in May from what appeared to be a drone flying over Florida’s panhandle, apparently sponsored by the National Reconnaissance Office – a body that doesn’t usually publicise its drone-related activities. The footage, now believed to have been shot in February, was likely uploaded as a demo video by a contractor.

While the Florida video caused quite a bit of confusion this month, another drone was making headlines for very different reasons. The ArcticShark (a modified version of the military TigerShark), a drone previously used in military operations, is now helping the fight against climate change, on a scientific mission to help scientists understand cloud formation and other atmospheric processes.

In other drone-related news, the Alaska Department of Transportation and Public Facilities has allowed some of its employees to receive licenses to operate drones to support projects involving roads, bridges and other structures. And a report has shown that drone funding fell by 64% in 2016 compared to 2015, with DJI dominating the market and issues such as battery life, connectivity problems and drone regulations stifling development efforts.

Robots in the water

From the skies to the sea: this May, NATO Nations agreed to use JANUS for their digital underwater communications. JANUS has the potential to make military and civilian, NATO and non-NATO, devices fully interoperable, doing away with the communication problems between systems and models by different manufacturers that have made underwater communication difficult up to this point.

Robots in the lab

Researchers at the Institute for Human and Machine Cognition in Pensacola, Florida, have developed a two-legged robot that can run without sensors or a computer. The robot, called the Planar Elliptical Runner, is stabilised purely by its physical design, which sets it apart from other two- and four-legged robots.

Meanwhile, a team at the University of California, Berkeley, has come up with a nimble-fingered robot that is able to pick up a wide range of objects using a 3D sensor and a deep learning neural network. It may not be perfect, but it’s the most nimble-fingered robot yet and the technology may find applications in picking and manufacturing in future.

Human-Machine Interaction

Most of the robots we interact with are practical helpers, offering support in the home, on the road or at work. Innovations some of us may have interacted with are autonomous lawn mowers, cars with self-driving features, or Amazon’s Alexa. And there are plenty more robots in the pipeline.

A four-wheeled, waterproof, battery-powered robot inspector developed by a team in Nevada may soon be supporting civil engineers and safety inspectors with bridge checks, reducing the chance of human errors and omissions that could lead to a collapse such as the I-35W bridge disaster in Minnesota in 2007.

And finally, engineers in Germany have built a robot priest called BlessU-2 that can beam light from its hands and deliver blessings in five languages. The robot is meant to spark discussions about faith, the church and the potential of AI.

Learning resources: Robot Academy

And to finish off our digest for May, we wanted to highlight a new open online resource for robotics education: the Robot Academy. Developed by Professor Peter Corke and the Queensland University of Technology (QUT), the Academy offers more than 200 lessons from robot joint control architecture to limits of electric motor performance. So if you find yourself with a bit of time on your hands, why not try a Robot Academy Masterclass?

Or if you’re after something a little bit different, there’s a new toolkit on “Computational Abstractions for Interactive Design of Robotic Devices” that lets you drag and drop parts of a virtual robot on screen without needing to know exactly what connects to what, as a complex physics engine ensures your robot won’t fail or fall over.

Missed any previous Digests? Click here to check them out.

Upcoming events for June – July 2017

Intelligent Ground Vehicle Competition: June 2-5, Rochester, MI.

CES Asia: June 7-9, Shanghai, China.

Unmanned Cargo Ground Vehicle Conference: June 13-14, Maaspoort, Venlo, The Netherlands.

Autonomous machines world: June 26-27, Berlin, Germany.

RoboUniverse: June 28-30, Seoul.

CIROS: July 5-8, Shanghai, China.

ASCEND Conference and Expo: July 19-21, Portland, OR.

RoboCup: July 25-31, Nagoya, Japan.

Call for papers:

1st Annual Conference on Robot Learning (CoRL 2017): Call for papers deadline 28 June.

Giving robots a sense of touch

A GelSight sensor attached to a robot’s gripper enables the robot to determine precisely where it has grasped a small screwdriver, removing it from and inserting it back into a slot, even when the gripper screens the screwdriver from the robot’s camera. Photo: Robot Locomotion Group at MIT

Eight years ago, Ted Adelson’s research group at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled a new sensor technology, called GelSight, that uses physical contact with an object to provide a remarkably detailed 3-D map of its surface.

Now, by mounting GelSight sensors on the grippers of robotic arms, two MIT teams have given robots greater sensitivity and dexterity. The researchers presented their work in two papers at the International Conference on Robotics and Automation last week.

In one paper, Adelson’s group uses the data from the GelSight sensor to enable a robot to judge the hardness of surfaces it touches — a crucial ability if household robots are to handle everyday objects.

In the other, Russ Tedrake’s Robot Locomotion Group at CSAIL uses GelSight sensors to enable a robot to manipulate smaller objects than was previously possible.

The GelSight sensor is, in some ways, a low-tech solution to a difficult problem. It consists of a block of transparent rubber — the “gel” of its name — one face of which is coated with metallic paint. When the paint-coated face is pressed against an object, it conforms to the object’s shape.

The metallic paint makes the object’s surface reflective, so its geometry becomes much easier for computer vision algorithms to infer. Mounted on the sensor opposite the paint-coated face of the rubber block are three colored lights and a single camera.

“[The system] has colored lights at different angles, and then it has this reflective material, and by looking at the colors, the computer … can figure out the 3-D shape of what that thing is,” explains Adelson, the John and Dorothy Wilson Professor of Vision Science in the Department of Brain and Cognitive Sciences.
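
As a rough illustration of what Adelson describes, shape recovery from colored illumination can be sketched as a photometric-stereo computation: under a simple Lambertian reflectance model, three known light directions and three measured intensities per pixel determine the surface normal. This is a minimal sketch with assumed light directions, not GelSight’s actual calibration pipeline.

```python
import numpy as np

# Assumed light directions (one row per colored light). These values are
# illustrative; GelSight's real geometry and calibration are more involved.
L = np.array([[0.0, 0.0, 1.0],      # light from directly above
              [0.8, 0.0, 0.6],      # light angled from the right
              [0.0, 0.8, 0.6]])     # light angled from the front

def normal_from_intensities(I):
    """Recover a unit surface normal from three per-light intensities.

    Lambertian model: I = L @ n, so n is proportional to solve(L, I).
    """
    n = np.linalg.solve(L, I)
    return n / np.linalg.norm(n)

# A flat patch facing the camera receives full light from above and
# partial light from the two angled sources:
n_true = np.array([0.0, 0.0, 1.0])
I = L @ n_true
print(normal_from_intensities(I))  # ~ [0, 0, 1]
```

Repeating this per pixel gives a normal map, which can then be integrated into the detailed height map GelSight produces.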

In both sets of experiments, a GelSight sensor was mounted on one side of a robotic gripper, a device somewhat like the head of a pincer, but with flat gripping surfaces rather than pointed tips.

Contact points

For an autonomous robot, gauging objects’ softness or hardness is essential to deciding not only where and how hard to grasp them but how they will behave when moved, stacked, or laid on different surfaces. Tactile sensing could also aid robots in distinguishing objects that look similar.

In previous work, robots have attempted to assess objects’ hardness by laying them on a flat surface and gently poking them to see how much they give. But this is not the chief way in which humans gauge hardness. Rather, our judgments seem to be based on the degree to which the contact area between the object and our fingers changes as we press on it. Softer objects tend to flatten more, increasing the contact area.

The MIT researchers adopted the same approach. Wenzhen Yuan, a graduate student in mechanical engineering and first author on the paper from Adelson’s group, used confectionary molds to create 400 groups of silicone objects, with 16 objects per group. In each group, the objects had the same shapes but different degrees of hardness, which Yuan measured using a standard industrial scale.

Then she pressed a GelSight sensor against each object manually and recorded how the contact pattern changed over time, essentially producing a short movie for each object. To both standardize the data format and keep the size of the data manageable, she extracted five frames from each movie, evenly spaced in time, which described the deformation of the object that was pressed.
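
The frame-sampling step described above can be sketched as follows; the array shapes and recording length here are illustrative stand-ins for the actual contact movies, with only the five-frame count taken from the article.

```python
import numpy as np

def sample_frames(movie, n=5):
    """Pick n frames evenly spaced in time from a (T, H, W) array.

    This standardizes the data format and keeps its size manageable,
    regardless of how long each press was recorded.
    """
    T = movie.shape[0]
    idx = np.linspace(0, T - 1, n).round().astype(int)
    return movie[idx]

# Stand-in for one recorded press: 60 frames of 32x32 contact images.
movie = np.zeros((60, 32, 32))
frames = sample_frames(movie)
print(frames.shape)  # (5, 32, 32)
```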

Finally, she fed the data to a neural network, which automatically looked for correlations between changes in contact patterns and hardness measurements. The resulting system takes frames of video as inputs and produces hardness scores with very high accuracy. Yuan also conducted a series of informal experiments in which human subjects palpated fruits and vegetables and ranked them according to hardness. In every instance, the GelSight-equipped robot arrived at the same rankings.

Yuan is joined on the paper by her two thesis advisors, Adelson and Mandayam Srinivasan, a senior research scientist in the Department of Mechanical Engineering; Chenzhuo Zhu, an undergraduate from Tsinghua University who visited Adelson’s group last summer; and Andrew Owens, who did his PhD in electrical engineering and computer science at MIT and is now a postdoc at the University of California at Berkeley.

Obstructed views

The paper from the Robot Locomotion Group was born of the group’s experience with the Defense Advanced Research Projects Agency’s Robotics Challenge (DRC), in which academic and industry teams competed to develop control systems that would guide a humanoid robot through a series of tasks related to a hypothetical emergency.

Typically, an autonomous robot will use some kind of computer vision system to guide its manipulation of objects in its environment. Such systems can provide very reliable information about an object’s location — until the robot picks the object up. Especially if the object is small, much of it will be occluded by the robot’s gripper, making location estimation much harder. Thus, at exactly the point at which the robot needs to know the object’s location precisely, its estimate becomes unreliable. This was the problem the MIT team faced during the DRC, when their robot had to pick up and turn on a power drill.

“You can see in our video for the DRC that we spend two or three minutes turning on the drill,” says Greg Izatt, a graduate student in electrical engineering and computer science and first author on the new paper. “It would be so much nicer if we had a live-updating, accurate estimate of where that drill was and where our hands were relative to it.”

That’s why the Robot Locomotion Group turned to GelSight. Izatt and his co-authors — Tedrake, the Toyota Professor of Electrical Engineering and Computer Science, Aeronautics and Astronautics, and Mechanical Engineering; Adelson; and Geronimo Mirano, another graduate student in Tedrake’s group — designed control algorithms that use a computer vision system to guide the robot’s gripper toward a tool and then turn location estimation over to a GelSight sensor once the robot has the tool in hand.

In general, the challenge with such an approach is reconciling the data produced by a vision system with data produced by a tactile sensor. But GelSight is itself camera-based, so its data output is much easier to integrate with visual data than the data from other tactile sensors.

In Izatt’s experiments, a robot with a GelSight-equipped gripper had to grasp a small screwdriver, remove it from a holster, and return it. Of course, the data from the GelSight sensor don’t describe the whole screwdriver, just a small patch of it. But Izatt found that, as long as the vision system’s estimate of the screwdriver’s initial position was accurate to within a few centimeters, his algorithms could deduce which part of the screwdriver the GelSight sensor was touching and thus determine the screwdriver’s position in the robot’s hand.

“I think that the GelSight technology, as well as other high-bandwidth tactile sensors, will make a big impact in robotics,” says Sergey Levine, an assistant professor of electrical engineering and computer science at the University of California at Berkeley. “For humans, our sense of touch is one of the key enabling factors for our amazing manual dexterity. Current robots lack this type of dexterity and are limited in their ability to react to surface features when manipulating objects. If you imagine fumbling for a light switch in the dark, extracting an object from your pocket, or any of the other numerous things that you can do without even thinking — these all rely on touch sensing.”

“Software is finally catching up with the capabilities of our sensors,” Levine adds. “Machine learning algorithms inspired by innovations in deep learning and computer vision can process the rich sensory data from sensors such as the GelSight to deduce object properties. In the future, we will see these kinds of learning methods incorporated into end-to-end trained manipulation skills, which will make our robots more dexterous and capable, and maybe help us understand something about our own sense of touch and motor control.”

VENTURER driverless car project publishes results of first trials

VENTURER is the first Connected and Autonomous Vehicle project to start in the UK. The results of VENTURER preliminary trials show that the handover process is a safety critical issue in the development of Autonomous Vehicles (AVs).

The first VENTURER trials set out to investigate ‘takeover’ (time taken to reengage with vehicle controls) and ‘handover’ (time taken to regain a baseline/normal level of driving behaviour and performance) when switching frequently between automated and manual driving modes within urban and extra-urban settings. This trial is believed to be the first to directly compare handover to human driver-control from autonomous mode in both simulator and autonomous road vehicle platforms.

The handover process is important from a legal and insurance perspective – the length of time it takes people to regain full control of the vehicle represents a meaningful risk to insurers and understanding when control is transferred between the vehicle and the driver has liability implications.

David Williams from AXA outlined that, “The results of this trial have been very useful as we consider the issues that the handover process raises for insurers. Although some motor manufacturers have said they will skip SAE Level 3, some are progressing with vehicles that will require the driver to take back control of the vehicle. The insurance industry will need to assess the relative safety of the handover systems as they come to market but VENTURER’s trial 1 results show that with robust testing we can properly assess how humans and autonomous vehicles interact during this crucial phase of the technologies’ evolution.”

VENTURER designed, tested and analysed both simulator and road vehicle-based handover trials.

50 participants were tested in a simulator and/or in the autonomous vehicle on roads on UWE Bristol campus. The tests were at speeds of 20, 30, 40 and 50 mph in the simulator and 20 mph in the autonomous vehicle; speeds common in urban and extra-urban settings. Baseline driving behaviour of participants was also tested, and then the length of time it took them to return to this baseline following handover.

During the trial, the driver was aware that they might be alerted to take control of the vehicle at any moment, either due to the decisions made by the driver, or the capabilities of the vehicle in particular situations. VENTURER has classified this as planned handover.

The 20 mph and 30 mph scenarios involved urban town/city driving, and the 40 mph and 50 mph scenarios involved extra-urban driving outside town/city limits. Driving speed, lateral lane position, and braking behaviour (amongst other measures) were recorded.

A key finding is that it took 2-3 seconds for participants to ‘takeover’ manual controls and resume active driving after short periods of autonomous driving in urban environments.

They also found that participants drove more slowly than the recommended speed limit for up to 55 seconds following a handover request, which suggests more cautious, but not necessarily safer, driving. This could be important for traffic management: if drivers on the road replicated this behaviour, it might impede the flow of traffic and offset some of the predicted benefits of AVs.

Image: Local World

In addition, participants returned to their baseline manual driving behaviour within 10-20 seconds of handover, with most measures, including speed, stabilising after 20-30 seconds. This was not the case in the highest-speed simulator condition, where most measures did not appear to stabilise within the 55-second measured handover period.
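
As a hedged illustration of the “return to baseline” measure, one could define stabilisation as the first moment after which a driving measure stays within a tolerance band around the driver’s baseline. The 10% band and 1 Hz sampling below are assumptions for the sketch, not VENTURER’s actual analysis parameters.

```python
import numpy as np

def time_to_baseline(trace, baseline, tol=0.10, hz=1.0):
    """Seconds until a post-handover trace stays within tol of baseline.

    Returns None if the trace never stabilises in the measured window.
    """
    within = np.abs(np.asarray(trace, dtype=float) - baseline) <= tol * baseline
    # earliest index from which every later sample is inside the band
    for i in range(len(within)):
        if within[i:].all():
            return i / hz
    return None

# Illustrative driver creeping back up to a 30 mph baseline (1 sample/s):
trace = [18, 20, 22, 24, 25, 26, 27, 28, 28, 29, 29, 30, 30, 30, 30]
print(time_to_baseline(trace, 30.0))  # 6.0
```

A definition like this makes the reported 10-20 second and 55 second figures comparable across speeds and measures.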

The team says these results have implications for the designers of autonomous vehicles with handback functionality, for example, in terms of phased handover systems. The results also inform the emerging market for insurance for autonomous vehicles.

Chair of the project, Lee Woodcock (Atkins) said, “The outcome of this research for trial one is significant and must provide food for thought as the market develops for driverless cars and how we progress through the different levels of automation. Further research must also explore interaction not just between vehicles but also with network operations and city management systems.”

Dr Phillip Morgan (UWE Bristol) said, “Designers need to proceed with caution and consider human performance under multiple driving conditions and scenarios in order to plot accurate takeover and handover time safety curves. In the time it takes for drivers to reach their baseline behaviour, the vehicle may have travelled some distance, depending on the speed. These initial trials show that there are some risk elements in the handover process and bigger studies with more participants may be needed to ensure there is sufficient data to build safe handover support systems.”

Professor Graham Parkhurst (UWE Bristol) said, “The results of these tests suggest that autonomous vehicles on highways should slow to a safe speed before handover is attempted. Further research is required to clarify what that safe speed is, but it would be substantially slower than the 70 mph motorway limit, and somewhat lower than the highest speed (50 mph) considered in our simulator trials.”

The trial clearly demonstrated that there were no major differences between control of the simulator and the Wildcat platforms used within the trial, validating the future use of simulators for the development of autonomous vehicles and associated technologies.

Click here to view the full results and papers.

Two stars, different fates

Levandowski (right) at MCE 2016. Source: Wikipedia Commons

Andy Rubin, who developed the Android operating system at Google and then led Google through multiple acquisitions into robotics, has launched a new consumer products company. Anthony Levandowski, who, after many years with Google and its autonomous driving project, launched Otto (which Uber acquired), was sued by Google, and has just been fired by Uber.

People involved in robotics – from the multi-disciplined scientists turned entrepreneurs to all the specialists and other engineers involved in any aspect of the industry of making things robotic – are a relatively small community. Most people know (or know of) most of the others, and get-togethers like ICRA, the IEEE International Conference on Robotics and Automation, being held this week in Singapore, are an opportunity to meet new up-and-coming talent as they present their papers and product ideas and mingle with older, more established players. Two of those players made headline news this week: Rubin, launching Essential, and Levandowski, getting fired.

Andy Rubin

Rubin came to Google in 2005 when it acquired Android, and left in 2014 to found an incubator for hardware startups, Playground Global. While at Google, Rubin became SVP of Mobile and Digital Content, responsible for the open-source smartphone operating system Android, and then started Google’s robotics group through a series of acquisitions. Android can now be found in more than 2 billion phones, TVs, cars and watches.

2008 Google Developer Day in Japan – Press Conference: Andy Rubin

In 2007, Rubin was developing his own version of a smartphone at Google, also named Android, when Apple launched the iPhone, a much more capable and stylish device. Google scrapped its hardware, but the software was marketed to HTC, whose handset became Google’s first Android-based phone. The software was similar enough to Apple’s that Steve Jobs was furious and, as reported in Fred Vogelstein’s ‘Dogfight: How Apple and Google Went to War and Started a Revolution,’ called Rubin a “big, arrogant f–k”, adding that “everything [he’s doing] is a f–king rip-off of what we’re doing.”

Jobs had trusted Google’s cofounders, Larry Page and Sergey Brin, and Google’s CEO Eric Schmidt, who sat on Apple’s board. All three had been telling Jobs about Android, but they kept telling him it would be different from the iPhone. He believed them until he actually saw the phone and its software and how similar it was to the iPhone’s, whereupon he insisted Google make a lot of changes and removed Schmidt from Apple’s board. Rubin was miffed and kept a sign on his office whiteboard that read “STEVE JOBS STOLE MY LUNCH MONEY.”

Quietly, stealthily, Rubin went about creating “a new kind of company using 21st-century methods to build products for the way people want to live in the 21st century.” That company is Essential and Essential just launched and is taking orders for its new $699 phone and a still-stealthy home assistant to compete with Amazon’s Echo and Google’s Home devices.

Wired calls the new Essential Phone “the anti-iPhone.” The first Phones will ship in June.

Anthony Levandowski

In 2004, Levandowski and a team from UC Berkeley built and entered an autonomous motorcycle in the DARPA Grand Challenge. In 2007 he joined Google to work with Sebastian Thrun on Google Street View. Outside of Google, he started a mobile mapping company that experimented with LiDAR technology, and another that built a self-driving, LiDAR-equipped Prius. Google acquired both companies, including their IP.

In 2016 Levandowski left Google to found Otto, a company making self-driving kits to retrofit semi-trailer trucks. Just as the kit was launched, Uber acquired Otto and Levandowski became the head of Uber’s driverless car operation in addition to continuing his work at Otto.

Quoting Wikipedia,

According to a February 2017 lawsuit filed by Waymo, the autonomous vehicle research subsidiary of Alphabet Inc, Levandowski allegedly “downloaded 9.7 GB of Waymo’s highly confidential files and trade secrets, including blueprints, design files and testing documentation” before resigning to found Otto.

In March 2017, United States District Judge William Haskell Alsup referred the case to federal prosecutors after Levandowski exercised his Fifth Amendment right against self-incrimination. In May 2017, Judge Alsup ordered Levandowski to refrain from working on Otto’s LiDAR and required Uber to disclose its discussions on the technology. Levandowski was later fired by Uber for failing to cooperate in an internal investigation.

The Uncanny Valley of human-robot interactions

The drone, named “Spark,” flew high above the man on stage, who waved his hands in the direction of the flying object. In a demonstration of DJI’s newest drone, the audience marveled at the Coke-can-sized device’s most compelling feature: gesture controls. Instead of a traditional remote control, this flying selfie machine follows hand movements across the sky. Gestures are the most innate language of mammals, and including robots in our primal movements means we have reached a new milestone of co-existence.

Madeline Gannon of Carnegie Mellon University is the designer of Mimus, a new gesture controlled robot featured in an art installation at The Design Museum in London, England. Gannon explained: “In developing Mimus, we found a way to use the robot’s body language as a medium for cultivating empathy between museum-goers and a piece of industrial machinery. Body language is a primitive yet fluid means of communication that can broadcast an innate understanding of the behaviors, kinematics and limitations of an unfamiliar machine.” Gannon wrote about her experiences recently in the design magazine Dezeen: “In a town like Pittsburgh, where crossing paths with a driverless car is now an everyday occurrence, there is still no way for a pedestrian to read the intentions of the vehicle…it is critical that we design more effective ways of interacting and communicating with them.”

So far, the biggest commercially deployed advances in human-robot interaction have been the conversational agents from Amazon, Google and Apple. While natural language processing has broken new ground in artificial intelligence, the social science of its acceptability in our lives might be its biggest accomplishment. Japanese roboticist Masahiro Mori described the danger of making computer-generated voices too indistinguishable from humans as the “uncanny valley.” Mori cautioned inventors against building robots that sound (and possibly look) too human, as the result elicits negative emotions best described as “creepy” and “disturbing.”

Recently, many toys have embraced conversational agents as a way of building greater bonds and increasing the longevity of play with kids. Barbie’s digital speech scientist, Brian Langner of ToyTalk, described his experience of crossing into the “Uncanny Valley”: “Jarring is the way I would put it. When the machine gets some of those things correct, people tend to expect that it will get everything correct.”

Kate Darling of MIT’s Media Lab, whose research centers on human-robot interactions, suggested that “if you get the balance right, people will like interacting with the robot, and will stop using it as a device and start using it as a social being.”

This logic inspired Israeli startup Intuition Robotics to create ElliQ—a bobbing head (eyeless) robot. The purpose of the animatronics is to break down barriers between its customer base of elderly patients and their phobias of technology. According to Intuition Robotics’ CEO, Dor Skuler, the range of motion coupled with a female voice helps create a bond between the device and its user. Don Norman, usability designer of ElliQ, said: “It looks like it has a face even though it doesn’t. That makes it feel approachable.”

Mayfield Robotics decided to add cute R2-D2-like sounds to its newest robot, Kuri. Mayfield hired former Pixar designers Josh Morenstein and Nick Cronan of Branch Creative with the sole purpose of making Kuri more adorable. To accomplish this mission, Morenstein and Cronan gave Kuri eyes but not a mouth, as a mouth would be, in their words, “creepy.” Cronan shared the challenges of designing the eyes: “Just by moving things a few millimeters, it went from looking like a dumb robot to a curious robot to a mean robot. It became a discussion of, how do we make something that’s always looking optimistic and open to listen to you?” Kuri bears a remarkable similarity to Morenstein and Cronan’s former theatrical robot, EVA.

At the far extreme of making robots act and behave like humans, RealDoll has been promoting six-thousand-dollar sex robots. To many, RealDoll has crossed the “uncanny valley” of creepiness with sex dolls that look and talk like humans. In fact, there is a growing grassroots campaign to ban RealDoll’s products globally, arguing that they endanger the very essence of human relationships. Florence Gildea writes on the organization’s blog: “The personalities and voices that doll owners project onto their dolls is pertinent for how sex robots may develop, given that sex doll companies like RealDoll are working on installing increasing AI capacities in their dolls and the expectation that owners will be able to customize their robots’ personalities.” The example given is how a doll expresses her “feelings” for her owner on Twitter:

Obviously a robot companion has no feelings; the feelings are a projection of the doll owners’, who, Gildea writes, “anthropomorphize their dolls to sustain the fantasy that they have feelings for the owner. The Twitter accounts seemingly manifest the dolls’ independent existence so that their dependence on their owners can seem to signify their emotional attachment, rather than it following inevitably from their status as objects. Immobility, then, can be misread as fidelity and devotion.” The implication of this behavior is that the owner’s female companion, albeit mechanical, enjoys “being dominated.” The fear the Campaign Against Sex Robots expresses is that the objectification of women (even robotic ones) reinforces problematic human sexual stereotypes.

Today, with technology at our fingertips, there is a growing phenomenon of preferring one-directional device relationships over complicated human encounters. MIT social sciences professor Sherry Turkle writes in her essay, Close Engagements With Artificial Companionship, that “over-stressed, overworked, people claim exhaustion and overload. These days people will admit they’d rather leave a voice mail or send an email than talk face-to-face. And from there, they say: ‘I’d rather talk to the robot. Friends can be exhausting. The robot will always be there for me. And whenever I’m done, I can walk away.’”

In the coming years humans will communicate more with robots in their lives from experiences in the home to the office to their leisure time. The big question will be not the technical barriers, but the societal norms that will evolve to accept Earth’s newest species.

“What do we think a robot is?” asked robot designer Don Norman. “Some people think it should look like an animal or a person, and it should move around. Or it just has to be smart, sense the environment, and have motors and controllers.”

Norman’s answer, like beauty, could be in the eye of the beholder.

Wearable system helps visually impaired users navigate

New algorithms power a prototype system for helping visually impaired users avoid obstacles and identify objects. Courtesy of the researchers.

Computer scientists have been working for decades on automatic navigation systems to aid the visually impaired, but it’s been difficult to come up with anything as reliable and easy to use as the white cane, the type of metal-tipped cane that visually impaired people frequently use to identify clear walking paths.

White canes have a few drawbacks, however. One is that the obstacles they come in contact with are sometimes other people. Another is that they can’t identify certain types of objects, such as tables or chairs, or determine whether a chair is already occupied.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a new system that uses a 3-D camera, a belt with separately controllable vibrational motors distributed around it, and an electronically reconfigurable Braille interface to give visually impaired users more information about their environments.

The system could be used in conjunction with or as an alternative to a cane. In a paper they’re presenting this week at the International Conference on Robotics and Automation, the researchers describe the system and a series of usability studies they conducted with visually impaired volunteers.

“We did a couple of different tests with blind users,” says Robert Katzschmann, a graduate student in mechanical engineering at MIT and one of the paper’s two first authors. “Having something that didn’t infringe on their other senses was important. So we didn’t want to have audio; we didn’t want to have something around the head, vibrations on the neck — all of those things, we tried them out, but none of them were accepted. We found that the one area of the body that is the least used for other senses is around your abdomen.”

Katzschmann is joined on the paper by his advisor Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science; his fellow first author Hsueh-Cheng Wang, who was a postdoc at MIT when the work was done and is now an assistant professor of electrical and computer engineering at National Chiao Tung University in Taiwan; Santani Teng, a postdoc in CSAIL; Brandon Araki, a graduate student in mechanical engineering; and Laura Giarré, a professor of electrical engineering at the University of Modena and Reggio Emilia in Italy.

Parsing the world

The researchers’ system consists of a 3-D camera worn in a pouch hung around the neck; a processing unit that runs the team’s proprietary algorithms; the sensor belt, which has five vibrating motors evenly spaced around its forward half; and the reconfigurable Braille interface, which is worn at the user’s side.

The key to the system is an algorithm for quickly identifying surfaces and their orientations from the 3-D-camera data. The researchers experimented with three different types of 3-D cameras, which used three different techniques to gauge depth but all produced relatively low-resolution images — 640 pixels by 480 pixels — with both color and depth measurements for each pixel.

The algorithm first groups the pixels into clusters of three. Because the pixels have associated location data, each cluster determines a plane. If the orientations of the planes defined by five nearby clusters are within 10 degrees of each other, the system concludes that it has found a surface. It doesn’t need to determine the extent of the surface or what type of object it’s the surface of; it simply registers an obstacle at that location and begins to buzz the associated motor if the wearer gets within 2 meters of it.
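The surface-detection step described above can be sketched roughly as follows. This is a minimal illustration of the idea only, not the researchers' code; the function names, the NumPy cluster representation, and the use of the first cluster as the reference orientation are all assumptions:

```python
import numpy as np

def surface_normals(clusters):
    """Estimate a unit normal for each cluster of three 3-D points.

    `clusters` has shape (n, 3, 3): n clusters, each of three (x, y, z)
    camera-frame points. Each triplet of points determines a plane.
    """
    v1 = clusters[:, 1] - clusters[:, 0]
    v2 = clusters[:, 2] - clusters[:, 0]
    normals = np.cross(v1, v2)
    return normals / np.linalg.norm(normals, axis=1, keepdims=True)

def is_surface(normals, tolerance_deg=10.0):
    """Declare a surface if five nearby cluster planes agree to within
    `tolerance_deg` degrees (the 10-degree threshold from the article).
    """
    if len(normals) < 5:
        return False
    reference = normals[0]
    # abs() treats oppositely signed normals of the same plane as parallel
    cos_angles = np.clip(np.abs(normals @ reference), -1.0, 1.0)
    angles_deg = np.degrees(np.arccos(cos_angles))
    return bool(np.all(angles_deg <= tolerance_deg))
```

When `is_surface` fires, the system would simply register an obstacle at that location and buzz the corresponding belt motor once the wearer comes within 2 meters.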

Chair identification is similar but a little more stringent. The system needs to complete three distinct surface identifications, in the same general area, rather than just one; this ensures that the chair is unoccupied. The surfaces need to be roughly parallel to the ground, and they have to fall within a prescribed range of heights.

Tactile data

The belt motors can vary the frequency, intensity, and duration of their vibrations, as well as the intervals between them, to send different types of tactile signals to the user. For instance, an increase in frequency and intensity generally indicates that the wearer is approaching an obstacle in the direction indicated by that particular motor. But when the system is in chair-finding mode, for example, a double pulse indicates the direction in which a chair with a vacant seat can be found.
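The distance-to-vibration mapping might look something like the sketch below. The qualitative behavior (frequency and intensity rising with proximity, a double pulse in chair-finding mode, the 2-meter range) comes from the article; all concrete numbers and the function name are illustrative assumptions:

```python
def vibration_pattern(distance_m, chair_mode=False, max_range_m=2.0):
    """Map an obstacle's distance to a tactile signal for one belt motor.

    Returns None when the obstacle is out of range (motor stays silent).
    The specific frequency and intensity values are made up for
    illustration; only the trends follow the system's description.
    """
    if distance_m > max_range_m:
        return None
    closeness = 1.0 - distance_m / max_range_m  # 0.0 = far, 1.0 = touching
    return {
        "frequency_hz": 50 + 200 * closeness,  # higher when closer
        "intensity": 0.2 + 0.8 * closeness,    # stronger when closer
        "pulses": 2 if chair_mode else 1,      # double pulse = vacant chair
    }
```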

The Braille interface consists of two rows of five reconfigurable Braille pads. Symbols displayed on the pads describe the objects in the user’s environment — for instance, a “t” for table or a “c” for chair. The symbol’s position in the row indicates the direction in which it can be found; the column it appears in indicates its distance. A user adept at Braille should find that the signals from the Braille interface and the belt-mounted motors coincide.
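The Braille grid's encoding could be sketched as follows. The two-by-five pad layout and the “t”/“c” symbols come from the article; the exact assignment of distance to a row and the distance cutoff are assumptions, since the article does not spell them out:

```python
# Two rows of five Braille pads: the symbol names the object type,
# its position along a row encodes direction, and which row it sits
# in encodes distance (a guess at the article's row/column scheme).
N_POSITIONS = 5
SYMBOLS = {"table": "t", "chair": "c"}  # letters from the article

def braille_cell(obj_type, direction_idx, distance_m, max_range_m=5.0):
    """Return (symbol, row, position) for one detected object.

    `direction_idx` selects one of the five positions in a row;
    nearer objects go on row 0, farther ones on row 1. The two-bin
    split and `max_range_m` are illustrative, not from the paper.
    """
    if not 0 <= direction_idx < N_POSITIONS:
        raise ValueError("direction index out of range")
    row = 0 if distance_m < max_range_m / 2 else 1
    return SYMBOLS[obj_type], row, direction_idx
```

A matching belt signal in the same direction would then let a Braille-adept user cross-check the two channels, as the article notes.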

In tests, the chair-finding system reduced subjects’ contacts with objects other than the chairs they sought by 80 percent, and the navigation system reduced the number of cane collisions with people loitering around a hallway by 86 percent.

May 2017 fundings, acquisitions, IPOs and failures

In May 2017, two robotics-related companies raised $9.5 billion in funding, and 22 others raised a combined $249 million. Acquisitions also continued to be substantial, with Toyota Motor’s $260 million purchase of Bastian Solutions plus three others where the amounts weren’t disclosed.

Fundings

  1. Didi Chuxing, the Uber of China, raised $5.5 billion in a round led by SoftBank with new investors Silver Lake Kraftwerk joining previous investors SoftBank, China Merchants Bank and Bank of Communications. According to TechCrunch, this latest round brings the total raised by DiDi to about $13 billion. Uber, by comparison, has raised $8.81 billion.
  2. Nvidia Corp, a Santa Clara, CA-based specialty GPU maker, raised $4 billion (representing a 4.9% stake in the company) according to Bloomberg. Nvidia’s newest chips are focused on providing power for deep learning for self-driving vehicles.
  3. ClearMotion, a Woburn, MA automotive technology startup that’s building shock absorbers with robotic, software-driven adaptive actuators for car stability, has raised $100 million in a Series C round led by a group of JP Morgan clients and NEA, Qualcomm Ventures and more.
  4. Echodyne, a Bellevue, WA developer of radar vision technology used in drones and self-driving cars, has raised $29 million in a Series B round led by New Enterprise Associates and joined by Bill Gates, Madrona Venture Group, and others.
  5. DeepMap, a Silicon Valley mapping startup, raised $25 million in a round led by Accel that included GSR Ventures and Andreessen Horowitz.

    “Autonomous vehicles are tempting us with a radically new future. However, this level of autonomy requires a highly sophisticated mapping and localization infrastructure that can handle massive amounts of data. I’m very excited to work with the DeepMap team, who have the requisite expertise in mapping, vision, and large scale operations, as they build the core technology that will fuel the next generation of transportation,” said Martin Casado, general partner at Andreessen Horowitz.

  6. Hesai Photonics Technology, a transplanted Silicon Valley-to-Shanghai sensor startup, raised $16 million in a Series A round led by Pagoda Investment with participation from Grains Valley Venture Capital, Jiangmen Venture Capital and LightHouse Capital Management. Hesai is developing a hybrid LiDAR device for self-driving cars. Hesai has already partnered with a number of autonomous driving technology and car companies including Baidu, Chinese electric vehicle start-up NIO and self-driving tech firm UiSee.
  7. Abundant Robotics, a Menlo Park, CA-based automated fruit-picking tech developer, raised $10 million in venture funding. GV (Google Ventures) led the round, and was joined by BayWa AG and Tellus Partners. Existing partners Yamaha Motor Company, KPCB Edge, and Comet Labs also participated.
  8. TriLumina Corp., an Albuquerque, NM-based developer of solid-state automotive LiDAR illumination for ADAS and autonomous driving, closed a $9 million equity and debt financing. Backers included new investors Kickstart Seed Fund and existing stakeholders Stage 1 Ventures, Cottonwood Technology Fund, DENSO Ventures and Sun Mountain Capital.
  9. Bowery Farming, a NYC indoor vertical farm startup, raised $7.5 million (in February) in a seed round led by First Round Capital and including Box Group, Homebrew, Flybridge, Red Swan, RRE, Lerer Hippeau Ventures, and Tom Colicchio, a restaurateur and judge on the reality cooking show Top Chef.
  10. Taranis, an Israel-based precision agriculture intelligence platform, raised $7.5 million in Series A funding. Finistere Ventures led the round, and was joined by Vertex Ventures. Existing investors Eshbol Investments, Mindset Ventures, OurCrowd, and Eyal Gura participated.
  11. Ceres Imaging, the Oakland, CA aerial imagery and analytics company, raised a $5 million Series A round of funding led by Romulus Capital.
  12. Stanley Robotics, a Paris-based automated valet parking service developer, raised $4 million in funding. Investors included Elaia Partners, Idinvest Partners and Ville de Demain. Stanley’s new parking robot is a mobile car-carrying lift that moves and tightly parks cars in outdoor locations.
  13. AIRY3D Inc, a Canadian start-up in 3D computer vision, raised $3.5 million in a seed round co-led by CRCM Ventures and R7 Partners. Other investors include WI Harper Group, Robert Bosch Venture Capital, Nautilus Venture Partners and several angel investors that are affiliates of TandemLaunch, the Montreal-based incubator that spun out AIRY3D.
  14. SkyX Systems, a Canada-based unmanned aircraft system developer, raised around $3 million in funding from Kuang-Chi Group.
  15. Catalia Health, a San Francisco-based patient care management company applying robotics to improve personal health, raised $2.5 million in funding. Khosla Ventures led the round.
  16. vHive, an Israeli startup developing software to operate autonomous drone fleets, raised $2 million (in April) in an A round led by StageOne VC and several additional private investors.
  17. Vivacity Labs, a London AI tech and sensor startup, raised $2 million from Tracsis, Downing Ventures and the London Co-Investment Fund and was also granted an additional $1.3 million from Innovate UK to create sensors with built-in machine learning to identify individual road users and manage traffic accordingly.
  18. Bluewrist, a Canadian integrator of vision systems, raised around $1.5 million (in February) from Istuary Toronto Capital.
  19. American Robotics, a Boston-based commercial farming drone system and analytics developer, raised $1.1 million in seed funding. Investors included Brain Robotics Capital.
  20. Kubo, a Danish educational robot startup, raised around $1 million from the Danish Growth Fund. Kubo is an educational robot that helps kids learn coding, math, language and music in a screenless, tangible environment.
  21. Zeals, a Japanese startup which produces interaction software for robots such as Palmi and Sota, has closed a $720k investment from Japanese adtech firm FreakOut Holdings.
  22. Kitty Hawk, a San Francisco drone platform startup, raised $600k in seed money in March from The Flying Object VC.
  23. Kraken Sonar, a Newfoundland marine tech startup, raised around $500k from RDC, a provincial Crown corporation responsible for improving Newfoundland and Labrador’s research and development. The funding will be used to develop the ThunderFish program which will combine smart sonar, laser and optical sensors, advanced pressure tolerant battery and thruster technologies and cutting edge artificial intelligence algorithms integrated onboard a cost effective AUV capable of 20,000 foot depths.
  24. Motörleaf, a Canadian ag sensor, communications and software startup, raised an undisclosed amount in a seed round (in March).

Acquisitions

  1. Toyota Motor Corp paid $260 million to acquire Bastian Solutions, a U.S.-based materials handling systems integrator. Toyota is the world’s No. 1 forklift truck manufacturer in terms of global market share. With this acquisition, Toyota is making a “full-scale entry” into the North American logistics technology sector and will also use Bastian’s systems to make its own global supply chain more efficient.
  2. Ctrl.Me Robotics, a Hollywood, CA drone startup, was acquired by Snap, Inc. for “an amount less than $1 million.” Ctrl.Me developed a system for capturing movie-quality aerial video but was recently winding down its operations. Snap acquired its assets and technology as well as talent, a fit with Snap’s Spectacles, which capture video for Snap’s mobile app.
  3. Applied Research Associates, Inc. (ARA), an employee-owned scientific research and engineering company, acquired Neya Systems LLC on April 28, 2017. Neya Systems LLC is known for their development of unmanned systems for defense, homeland security, and commercial users. Terms of the deal were not disclosed.
  4. Trimble has acquired Müller-Elektronik and all its subsidiary companies for an undisclosed amount. Müller is a German manufacturer and integrator of farm implement controls, steering kits and precision farming solutions. The transaction is expected to close in Q3 2017. Müller was key in the development of the ISOBUS communication protocol found in most tractors and towed implements, which allows one terminal to control several implements and machines, regardless of manufacturer.

IPOs

  1. Gamma 2 Robotics, a security robot maker, launched a $6 million private offering to accredited investors.
  2. Aquabotix, a Fall River, MA-headquartered company, raised $5.5 million from their IPO of UUV (ASX:UUV) on the Australian Securities Exchange (ASX). Aquabotix manufactures commercial and industrial underwater drone/camera systems and has shipped over 350 units worldwide.

Failures

  1. FarmLink LLC
  2. EZ Robotics (CN)

Europe regulates robotics: Summer school brings together researchers and experts in robotics

After a successful 2016 first edition, our next summer school cohort on The Regulation of Robotics in Europe: Legal, Ethical and Economic Implications will take place in Pisa at the Scuola Sant’Anna, from 3–8 July.

When the Robolaw project came to an end – and we presented our results before the European Parliament – we clearly perceived that a leap was needed not only in some approaches to regulation but also in the way social scientists, as well as engineers, are trained.

Indeed, sound technical analysis in law and robotics, if it is not to be lured into science fiction, requires an adequate understanding of the peculiarities of the systems being studied. A bottom-up approach, like the one adopted by Robolaw and its guidelines, is essential.

Social scientists, and lawyers in particular, often lack such knowledge and thus tend either to make unreasonable assumptions (of technological developments that are far-fetched or simply unrealistic) or to misperceive what the pivotal point of the analysis is going to be. The notion of autonomy is a fitting example. The consequence, however, is not simply bad scientific literature, but potentially inadequate policies, and thus wrong decisions, even legislative ones, being adopted, hampering research and development of new applications while overlooking relevant issues and impairments.

Similarly, engineers working in robotics are often confronted with philosophical and legal debates about the devices they research, debates they are not always equipped to follow. Yet those debates are valuable, for they identify societal concerns and expectations that can be used to orient research strategically, and engineers ought to participate and have a say in them.

Ultimately, it is in everybody’s interest to better address existing and emerging needs, fulfilling desires and avoiding eliciting often ungrounded fears. This is what the European Union understands as Responsible Research and Innovation, but it is also the prerequisite for the diffusion of new applications in society and the emergence of a sound robotics industry. Moreover, the current tendency in EU regulation to favour by-design approaches, whereby privacy and other rights need to be enforced through the very functioning of the device, requires technicians to consider such concerns early on, during the development phase of their products.

A common language thus needs to be created, to avoid a Tower of Babel effect: one that preserves each discipline’s peculiarities and specificities while allowing close cooperation.

A multidisciplinary approach is required, grounded in philosophy (ethics in particular), law (including law-and-economics methodologies), economics and innovation management, and engineering.

With that idea in mind, we competed and won a Jean Monnet grant – a prestigious funding action of the EU Commission, mainly directed towards the promotion of education and teaching activities – with a project titled: Europe Regulates Robotics and organized the first edition of the Summer School The Regulation of Robotics in Europe: Legal, Ethical and Economic Implications in 2016.

The opening event of the Summer School saw the participation of MEP (Member of the European Parliament) Mady Delvaux-Stehres, who presented what was then the draft recommendation (now approved) of the EU Parliament on the Civil Law Rules on Robotics, and of Mihalis Kritikos, a Policy Analyst at the Parliament who personally contributed to the drafting of that document. Maria Chiara Carrozza, former Italian minister of University Education and Research, professor of robotics and member of the Italian Senate, discussed Italian political initiatives. We also had entrepreneurs, such as Roberto Valleggi, and engineers from industry, such as Arturo Baroncelli of Comau, and from academia, such as Fabio Bonsignorio, who also taught the course.

Classes dealt with methodologies – the Robolaw approach – notions of autonomy, liability – and different liability models – privacy, benchmarking and robot design, machine ethics and human enhancement through technology, innovation management and technology assessment. Students also had the chance to visit the Biorobotics Institute laboratories in Pontedera (directed by Prof. Paolo Dario) and see many different applications and research being carried out, directly explained by the people who work on them.

The most impressive part was, however, our class. We managed to put together a truly international group of young and bright minds, many of whom were already enrolled in a PhD program (in law, philosophy, engineering or management) and came from universities such as Edinburgh, the London School of Economics, Sorbonne, Cambridge, Vienna, Bologna, Suor Orsola, Bicocca, Milan, Hannover, Pisa, Pavia and Freiburg. Others came from prominent European robotics companies, or were practitioners, entrepreneurs and policy makers from EU institutions.

At the end of the Summer School, some presented their research on a broad set of extremely interesting topics, such as driverless car liability and innovation management, machine ethics and the trolley problem, anthrobotics and decision-making in natural and artificial systems.

We had lively in-class debates, and a true community was created that is still in contact today. Five of our students actively participated in the 2017 European Robotics Forum in Edinburgh, and more are working and publishing on such matters.

We can say we truly achieved our goal! However, the challenge has just begun. We want to reach out to more people and replicate this successful initiative. A second edition of the Summer School will take place again this year in Pisa at the Scuola Sant’Anna from July 3rd to 8th and we are accepting applications until June 7th.

I am certain we will manage again to put together an incredible group of individuals, and I can’t wait to meet our new class. On our side, we are preparing a lot of interesting surprises for them, including the participation of policy makers involved in the regulation of robotics at EU level to provide a first-hand look at what is happening in the EU.

More information about the summer school can be found on our website here.

Registration to the summer school can be found here.

Researcher to develop bio-inspired ‘smart’ knee for prosthetics

A researcher at the University of the West of England (UWE Bristol) is developing a bio-inspired ‘smart’ knee joint for prosthetic lower limbs. Dr Appolinaire Etoundi, based at Bristol Robotics Laboratory, is leading the research and will analyse the functions, features and mechanisms of the human knee in order to translate this information into a new bio-inspired procedure for designing prosthetics.

Dr Etoundi gained his PhD in bio-inspired technologies from the University of Bristol where he developed a design procedure for humanoid robotic knee joints.  He is now turning his attention to nature, a growing area in robotics known as Bio-mimicry, combining curiosity about how biological systems work with solving complex engineering problems, in order to develop a prototype smart knee joint for prosthetics.

Andy Lewis, a Paralympic Triathlon Gold Medallist (Rio 2016) who wears a lower limb prosthetic, will try out the new joint once developed, to compare its energy consumption and gait efficiency with those of current prosthetics. Approximately 100,000 knee replacement operations are currently performed every year in the UK. Lower limb amputation has a profound effect on daily life, and prostheses must be comfortable and adapted to the wearer so that daily activities such as walking and running can be maintained.

Looking for inspiration in nature, Dr Etoundi will examine how the human knee works, as well as looking closely at the design of knee replacements used in surgery and at current knee joints in prosthetic limbs.  These three areas of knowledge will inform a procedure for designing a knee that could give greater, more responsive movement, while offering the control and intelligence that comes from robotics.

Dr Etoundi says, “I have spent years designing knee joints for humanoid robots, but the human knee has evolved over millions of years and is incredibly successful.  The human knee is a very complex joint with ligaments, which guide the motion of the knee, and bones that perform the motion.  Current mechanisms in prosthetic knees have a straightforward pin joint with ball bearings that does not have the sophisticated range of motion and stability of the human knee with its cruciate ligaments.

“The complex interaction between the soft tissue (ligaments) and the bones in the knee joint is an area that has yet to be replicated in prosthetics.  We need to understand this better in order to provide a better knee joint for people to use. I will study the different mechanisms within the knee joint and look for ways to translate its beneficial functionalities into a design concept for prosthetics.

“I want to create a prosthetic knee that will give the greatest range of motion with the least friction, enabling walking, climbing stairs, squatting and stability, while also offering important attributes of current prosthetics and the benefits of robotic technology.”

Andy Lewis, who will try out Dr Etoundi’s nature-inspired design says, “I was pleased when Appo approached me. He understands the importance of a good prosthetic for sports people, and it will be interesting to see what he discovers that might make a better prosthetic which is more responsive.  I am looking forward to seeing his early designs next year and trying them out.”

The research team includes Professor Richie Gill (University of Bath), Dr Ravi Vaidyanathan (Imperial College London) and Dr Michael Whitehouse (University of Bristol).

Dr Etoundi is a Senior Lecturer in Mechatronics at UWE Bristol and is a member of the Medical Robotics group at Bristol Robotics Laboratory, which looks at the application of robotic technology in human-controlled and surgical applications.

 

Artificial intelligence: Europe needs a “human in command approach,” says EESC

Credit: EESC

The EU must pursue a policy that ensures that artificial intelligence (AI) is developed, deployed and used in Europe to the benefit, and not to the detriment, of society and social welfare, the Committee said in an own-initiative opinion on the societal impact of AI, in which 11 fields are identified for action.

“We need a human-in-command approach to artificial intelligence, in which machines remain machines and people always retain control over these machines,” said rapporteur Catelijne Muller (NL – Workers’ Group). This is not just about technical control: “People can and must decide whether, when and how AI is used in our daily lives, which tasks we entrust to AI, and how transparent and ethical it all is. Ultimately, it is up to us to decide whether we want certain activities to be carried out, or care or medical decisions to be made, by AI, and whether we want to accept that AI might jeopardize our security, privacy and autonomy,” said Ms Muller.

Artificial intelligence has recently undergone rapid growth. The AI market is currently worth approximately USD 664 million and is expected to grow to USD 38.8 billion by 2025. It is virtually undisputed that artificial intelligence can bring great social benefits: consider applications for sustainable agriculture, environmentally friendly production, safer traffic, safer work, a safer financial system, better medicine and more. Artificial intelligence may even contribute to the eradication of disease and poverty.

But the benefits of AI can only be realized if the challenges surrounding it are also addressed. The Committee has identified 11 areas in which AI raises societal challenges: ethics; safety; transparency, privacy and standards; employment; education; (in)equality and inclusiveness; legislation; governance and democracy; as well as warfare and superintelligence.

These challenges cannot be left to industry alone; they are a matter for governments, social partners, scientists and companies together. The EESC believes the EU should set policy frameworks here and play a global leadership role. “We need pan-European norms and standards for AI, just as we now have for food and household appliances. We need a pan-European code of ethics to ensure that AI systems remain compatible with the principles of human dignity, integrity, freedom and cultural and gender diversity, as well as with fundamental human rights,” said Catelijne Muller, “and we need employment strategies to maintain or create jobs and to ensure that employees remain autonomous and take pleasure in their work.”

The issue of the impact of AI on employment is indeed central to the debate on AI in Europe, where unemployment is still high because of the crisis. Although predictions of the share of jobs that will be lost to AI over the next 20 years vary from a modest 5% to a disastrous 100% (a society without jobs), the rapporteur believes, based on a recent McKinsey report, that it is more likely that parts of jobs, rather than complete jobs, will be taken over by AI. In that case, everything comes down to education, lifelong learning and training, to ensure that workers benefit from these developments rather than fall victim to them.

The EESC opinion also calls for a European AI infrastructure, with open-source, privacy-respecting learning environments, real-life test environments, and high-quality data sets for the training and development of AI systems. To date, AI has mainly been developed by the “big five” (Amazon, Facebook, Apple, Google and Microsoft). Although these companies say they favor the open development of AI, and some offer their AI development platforms as open source, full accessibility is not guaranteed. An EU AI infrastructure, possibly accompanied by a European AI certification or label, could not only promote the development of responsible and sustainable AI but also strengthen the EU’s competitive position.

Livestream: Committee to take stance in the European debate on artificial intelligence

524th plenary session, Main debating chamber, European Parliament. Credit: EESC

Today, the European Economic and Social Committee (EESC) will debate its stance in the European discussion on AI, taking a position on several contested issues, notably the question of legal personality for robots. The report, drawn up by the Dutch rapporteur, Ms Catelijne Muller, member of the Workers’ Group, will be debated at the EESC’s plenary in Brussels on 31 May.

Click here to watch the livestream. Live coverage will begin at 14:30 with the debate on AI at 16:00 CEST.

You can also download and read the referral and related documents on the consequences of artificial intelligence for the (digital) single market, production, consumption, employment and society here.

From the EESC website:

“Artificial Intelligence (AI) technologies offer the potential for creating new and innovative solutions to improve people’s lives, grow the economy, and address challenges in health and wellbeing, climate change, safety and security. Like any disruptive technology, however, AI carries risks and presents complex societal challenges in several areas such as labour, safety, privacy, ethics, skills and so on.

A broad approach towards AI, covering all its effects (good and bad) on society as a whole, is crucial. Especially in a time where developments are accelerating.”

RoboCup video series: Rescue league

RoboCup is an international scientific initiative with the goal to advance the state of the art of intelligent robots. Established in 1997, the original mission was to field a team of robots capable of winning against the human soccer World Cup champions by 2050.

To celebrate 20 years of RoboCup, the Federation is launching a video series featuring each of the leagues with one short video for those who just want a taster, and one long video for the full story. Robohub will be featuring one league every week leading up to RoboCup 2017 in Nagoya, Japan.

Robotics isn’t only about playing soccer; it’s also about helping people. This week, we take a look at what it takes to be part of RoboCupRescue. You’ll hear about the history and ambitions of RoboCup from the trustees, and from inspiring teams around the world.

Short version:

Long version:

Can’t wait to watch the rest? You can view all the videos on the RoboCup playlist below:

https://www.youtube.com/playlist?list=PLEfaZULTeP_-bqFvCLBWnOvFAgkHTWbWC

Please spread the word! And if you would like to join a team, check here for more information.

If you liked reading this article, you may also want to read:

See all the latest robotics news on Robohub, or sign up for our weekly newsletter.

Faster, more nimble drones on the horizon

Engineers at MIT have come up with an algorithm to tune a Dynamic Vision Sensor (DVS) camera, simplifying a scene to its most essential visual elements and potentially enabling the development of faster drones. Image: Jose-Luis Olivares/MIT

There’s a limit to how fast autonomous vehicles can fly while safely avoiding obstacles. That’s because the cameras used on today’s drones can only process images so fast, frame by individual frame. Beyond roughly 30 miles per hour, a drone is likely to crash simply because its cameras can’t keep up.
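To put that frame-rate limit in perspective, here is a quick back-of-the-envelope calculation (ours, not the article's) of how far a drone travels between frames of a conventional 30 fps camera at 30 mph:

```python
# Rough illustration: blind travel between frames of a 30 fps camera at 30 mph.
MPH_TO_MS = 0.44704            # metres per second in one mile per hour
speed_ms = 30 * MPH_TO_MS      # ~13.4 m/s
frame_interval = 1 / 30        # seconds between frames at 30 fps
distance_per_frame = speed_ms * frame_interval
print(f"{distance_per_frame:.2f} m travelled per frame")  # ~0.45 m
```

Nearly half a metre of unseen travel between frames helps explain why conventional cameras cap the safe speed of autonomous flight.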

Recently, researchers in Zurich invented a new type of camera, known as the Dynamic Vision Sensor (DVS), that continuously visualizes a scene in terms of changes in brightness, at extremely short, microsecond intervals. But this deluge of data can overwhelm a system, making it difficult for a drone to distinguish an oncoming obstacle through the noise.

Now engineers at MIT have come up with an algorithm to tune a DVS camera to detect only specific changes in brightness that matter for a particular system, vastly simplifying a scene to its most essential visual elements.

The results, which they presented at the IEEE American Control Conference in Seattle, can be applied to any linear system that directs a robot to move from point A to point B as a response to high-speed visual data. Eventually, the results could also help to increase the speeds for more complex systems such as drones and other autonomous robots.

“There is a new family of vision sensors that has the capacity to bring high-speed autonomous flight to reality, but researchers have not developed algorithms that are suitable to process the output data,” says lead author Prince Singh, a graduate student in MIT’s Department of Aeronautics and Astronautics. “We present a first approach for making sense of the DVS’ ambiguous data, by reformulating the inherently noisy system into an amenable form.”

Singh’s co-authors are MIT visiting professor Emilio Frazzoli of the Swiss Federal Institute of Technology in Zurich, and Sze Zheng Yong of Arizona State University.

Taking a visual cue from biology

The DVS camera is the first commercially available “neuromorphic” sensor — a class of sensors that is modeled after the vision systems in animals and humans. In the very early stages of processing a scene, photosensitive cells in the human retina, for example, are activated in response to changes in luminosity, in real time.

Neuromorphic sensors are designed with multiple circuits arranged in parallel, similarly to photosensitive cells, that activate and produce blue or red pixels on a computer screen in response to either a drop or spike in brightness.

Instead of a typical video feed, a drone with a DVS camera would “see” a grainy depiction of pixels that switch between two colors, depending on whether that point in space has brightened or darkened at any given moment. The sensor requires no image processing and is designed to enable, among other applications, high-speed autonomous flight.
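The event-based behaviour described above can be sketched in a few lines. This is our own simplified model of the idea, not the DVS's actual circuitry: each pixel compares its current brightness to its previous value and emits an ON or OFF event only where the change crosses a threshold.

```python
import numpy as np

def dvs_events(prev_frame, curr_frame, threshold=0.1):
    """Return a sparse event map: +1 where a pixel brightened past the
    threshold (ON event), -1 where it darkened (OFF event), 0 otherwise."""
    diff = curr_frame - prev_frame
    events = np.zeros_like(diff, dtype=int)
    events[diff > threshold] = 1     # brightness spike -> ON event
    events[diff < -threshold] = -1   # brightness drop  -> OFF event
    return events

# Toy 2x2 scene: top-left brightens, bottom-left darkens, the rest is quiet.
prev = np.array([[0.5, 0.5], [0.5, 0.5]])
curr = np.array([[0.8, 0.5], [0.2, 0.55]])
print(dvs_events(prev, curr))  # ON event top-left, OFF event bottom-left
```

Unlike a frame, the output is mostly zeros: only the pixels where something changed carry information, which is what makes microsecond-scale updates tractable.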

Researchers have used DVS cameras to enable simple linear systems to see and react to high-speed events, and they have designed controllers, or algorithms, to quickly translate DVS data and carry out appropriate responses. For example, engineers have designed controllers that interpret pixel changes in order to control the movements of a robotic goalie to block an incoming soccer ball, as well as to direct a motorized platform to keep a pencil standing upright.

But for any given DVS system, researchers have had to start from scratch in designing a controller to translate DVS data in a meaningful way for that particular system.

“The pencil and goalie examples are very geometrically constrained, meaning if you give me those specific scenarios, I can design a controller,” Singh says. “But the question becomes, what if I want to do something more complicated?”

Cutting through the noise

In the team’s new paper, the researchers report developing a sort of universal controller that can translate DVS data in a meaningful way for any simple linear, robotic system. The key to the controller is that it identifies the ideal value for a parameter Singh calls “H,” or the event-threshold value, signifying the minimum change in brightness that the system can detect.

Setting the H value for a particular system can essentially determine that system’s visual sensitivity: A system with a low H value would be programmed to take in and interpret changes in luminosity that range from very small to relatively large, while a high H value would exclude small changes, and only “see” and react to large variations in brightness.
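As a toy illustration of this trade-off (our own, with made-up numbers, not the paper's formulation), consider separating small brightness jitter from one genuine obstacle-sized change:

```python
import numpy as np

rng = np.random.default_rng(0)
changes = rng.normal(0.0, 0.02, 99)   # small brightness jitter (noise)
changes = np.append(changes, 0.5)     # one genuine, obstacle-sized change

# A low H reacts to nearly everything; a high H keeps only the big change.
for H in (0.01, 0.1):
    events = np.abs(changes) > H      # events the system would react to
    print(f"H={H}: {events.sum()} events detected")
```

With a low H the system is flooded by jitter events; with a high H it sees only the one change that matters, which is the "sweet spot" behaviour the algorithm searches for.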

The researchers formulated an algorithm first by taking into account the possibility that a change in brightness would occur for every “event,” or pixel activated in a particular system. They also estimated the probability for “spurious events,” such as a pixel randomly misfiring, creating false noise in the data.

Once they derived a formula with these variables in mind, they were able to work it into a well-known algorithm known as an H-infinity robust controller, to determine the H value for that system.

The team’s algorithm can now be used to set a DVS camera’s sensitivity to detect the most essential changes in brightness for any given linear system, while excluding extraneous signals. The researchers performed a numerical simulation to test the algorithm, identifying an H value for a theoretical linear system, which they found was able to remain stable and carry out its function without being disrupted by extraneous pixel events.

“We found that this H threshold serves as a ‘sweet-spot,’ so that a system doesn’t become overwhelmed with too many events,” Singh says. He adds that the new results “unify control of many systems,” and represent a first step toward faster, more stable autonomous flying robots, such as the Robobee, developed by researchers at Harvard University.

“We want to break that speed limit of 20 to 30 miles per hour, and go faster without colliding,” Singh says. “The next step may be to combine DVS with a regular camera, which can tell you, based on the DVS rendering, that an object is a couch versus a car, in real time.”

This research was supported in part by the Singapore National Research Foundation through the SMART Future Urban Mobility project.

Talking machines: Graphons and “inferencing”

In episode two of season three, Neil takes us through the basics of dropout, we chat about the definition of inference (it’s more about context than you think!), and we hear an interview with Jennifer Chayes of Microsoft.

If you liked this article, you may also want to read:

See all the latest robotics news on Robohub, or sign up for our weekly newsletter.

How a challenging aerial environment sparked a business opportunity

We develop the fastest, smallest and lightest distance sensors for advanced robotics in challenging environments. These sensors were born from a fruitful collaboration with CERN while we were developing flying indoor inspection systems.

It all started with a challenge: the European Organization for Nuclear Research (CERN) asked if we could use drones to perform fully autonomous inspections within the tunnel of the Large Hadron Collider. If you haven’t seen it, it’s a complex environment, perhaps one of the most unfriendly imaginable for fully autonomous drone flight. But we accepted the mission, rolled up our sleeves, and got to work. As you can imagine, the mission was very challenging!

Large Hadron Collider tunnel. Credit: CERN

One of the main issues we faced was finding suitable sensors to place on the drone for navigation and anti-collision. We got everything on the market that we could find and tried to make it work. Ultrasound was too slow and its range too short. Lasers tended to be too big and too heavy, and consumed too much power. Monocular and stereo vision were highly complex, placed a huge computational burden on the system, and even then were prone to failure. It became clear that what we really needed simply didn’t exist! That’s how the concept for the TeraRanger brand of sensors was born.

Having failed to make any of the available sensing technologies work at the required performance levels, we concluded that we would need to build the sensors from the ground up. It wasn’t easy (and still isn’t), but eventually we had something small enough, light enough (8 g), with fast refresh rates and enough range to work well on the drone. Leading robotics academics saw the sensor’s potential and wanted some for themselves; then more people wanted them, and before long we had a new business.

Millimetre precision wasn’t vital for the drone application, but the high refresh rates and range were. And by not using a laser emitter, we were able to give the sensor a 3-degree field of view, which for many applications proved a real boon, giving a smoother flow of data over uneven surfaces and in complex, cluttered environments. It also made the sensor fully eye-safe and kept the supply current low.

 

Plug and play multi-axis sensing

Knowing that we would often need to use multiple sensors at the same time, we also designed in support for multi-sensor, multi-axis requirements. Using a ‘hub’, we can connect up to eight sensors simultaneously, providing a simple, plug-and-play approach to multi-sensor applications. By controlling the sequence in which the sensors are fired (along with some other parameters), we can limit or eliminate the potential for sensor cross-talk, and then stream an array of calibrated distance values in millimetres at high speed. From a user’s perspective this is about as simple as it gets, since the hub also centralises power management.
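The firing scheme described above might be sketched as follows. The function names, timings and distances here are hypothetical illustrations of the round-robin idea, not Terabee's actual API: triggering sensors one at a time means only one emitter is active at any moment, so pulses cannot be confused with each other.

```python
import time

def read_hub(sensors, settle_s=0.0005):
    """Fire each sensor in sequence and collect one distance frame (mm)."""
    frame = []
    for sensor in sensors:
        frame.append(sensor())   # only one emitter active at a time
        time.sleep(settle_s)     # let the pulse die out before the next fires
    return frame

# Stand-in sensor callables returning fixed distances in millimetres.
sensors = [lambda d=d: d for d in (812, 1204, 655, 990, 1433, 720, 845, 1010)]
print(read_hub(sensors))  # one frame of eight calibrated readings
```

The user sees only the final array of calibrated distances; the sequencing and power management stay inside the hub.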

TeraRanger Tower

There’s no need to get in a spin

Using that same concept, we continued to push the boundaries. A significant evolution has been our approach to LiDAR scanning, not just from a hardware point of view (although that is also different) but from a conceptual one too. We’ve taken the same philosophy of small, lightweight sensors with very high refresh rates (up to 1 kHz) and applied it to create a new style of static LiDAR. Rather than rotating a sensor or using other mechanical methods to move a beam, TeraRanger Tower simultaneously monitors eight axes (or more, if you stack multiple units together) and streams an array of data at up to 270 Hz!
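As a sketch of what one static eight-axis frame gives you (our own geometry, assuming sensors spaced at 45 degrees; not Terabee's firmware), each frame of radial distances maps directly to points around the robot with no moving parts:

```python
import math

def frame_to_points(distances_mm):
    """Map eight radial distances (mm) to (x, y) points in the sensor frame."""
    points = []
    for i, d in enumerate(distances_mm):
        angle = math.radians(i * 45)   # sensor i looks along i * 45 degrees
        points.append((d * math.cos(angle), d * math.sin(angle)))
    return points

# A circular wall one metre away in every direction.
points = frame_to_points([1000] * 8)
print([(round(x), round(y)) for x, y in points])
```

At 270 Hz, the same point is re-measured hundreds of times per second, which is what lets a sparse scan still be trusted.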

Challenging the point-cloud conundrum

With no motors or other moving parts, the hardware itself has many advantages: it is silent, lightweight and robust. But there is also a secondary benefit in the data. Traditional thinking in the robotics community holds that to perform navigation, Simultaneous Localisation and Mapping (SLAM) and collision avoidance, you have to “see” everything around you. Just as we did at the start of our journey, people focus on complex solutions, like stereo vision, gathering millions of data points that then require complex and resource-hungry processing. The complexity of the solution, and of the algorithms, creates many potential failure modes. Having discovered for ourselves that the complicated solution is not always necessary, our approach is different: we monitor fewer points, but at very fast refresh rates, to ensure that what we think we see is really there. As a result, we build sparser point clouds, but with very reliable data. This requires less complex algorithms and processing, and can be done with lighter-weight computing. The result is a more robust, and potentially safer, solution, especially when you can make some assumptions about your environment or harness odometry data to augment the LiDAR data. We were told many times that you could never do SLAM with just eight points, but we proved that wrong.

Coming full circle: There are no big problems, just a lot of little problems

All of this leads back to our original mission. We haven’t solved it yet, but recently we mounted TeraRanger Tower on a drone and proved, for the first time we believe, that a static LiDAR can be used for drone anti-collision. The proof of concept was quickly put together to harness code developed for the open-source APM 3.5 flight controller, with Terabee writing drivers to hook into the codebase. Anti-collision is just one step in the journey to fully autonomous drone flight, and we are still on the wild ride of technology, but we are definitely taming the beast!

If you have expertise in drone collision avoidance and wish to help us overcome the remaining challenges, please contact us at teraranger@terabee.com. For more information about Terabee and our TeraRanger brand of sensors, please visit our website.
