
RoboCup video series: @Home league

RoboCup is an international scientific initiative with the goal of advancing the state of the art of intelligent robots. Established in 1997, its original mission was to field a team of robots capable of winning against the human soccer World Cup champions by 2050.

To celebrate 20 years of RoboCup, the Federation is launching a video series featuring each of the leagues with one short video for those who just want a taster, and one long video for the full story. Robohub will be featuring one league every week leading up to RoboCup 2017 in Nagoya, Japan.

This week, we feature the RoboCup@Home league. Robots helping at home can certainly ‘feel’ like the future. One day, these robots might help with various tasks around the house. You’ll hear about the history and ambitions of RoboCup from the trustees, and from inspiring teams around the world.

Short version:

Long version:

Want to watch the rest? You can view all the videos on the RoboCup playlist below:

https://www.youtube.com/playlist?list=PLEfaZULTeP_-bqFvCLBWnOvFAgkHTWbWC

Please spread the word! And if you would like to join a team, check here for more information.



China’s strategic plan for a robotic future is working: 500+ Chinese robot companies

In 2015, after much research, I wrote about China having 194 robot companies and used screen shots of The Robot Report’s Global Map to show where they were and a chart to show their makeup. We’ve just concluded another research project and have added hundreds of new Chinese companies to the database and global map.

Why is China so focused on robots?

China installed 90,000 robots in 2016, one-third of the world’s total and a 30% increase over 2015. Why?

Simply put, China has three drivers helping it move toward country-wide adoption of robotics: scale, growth momentum, and money. First, startup companies can achieve scale quickly because the domestic market is so large. Second, companies are under pressure to automate, driving double-digit growth in demand for industrial robots (according to the International Federation of Robotics). Third, the government is strongly behind the move.

Made in China 2025 and 5-Year Plans

Chinese President Xi Jinping has called for “a robot revolution” and initiated the “Made in China 2025” program. More than 1,000 firms have entered (or begun transitioning into) robotics to take advantage of the program, and a new robotics association, the CRIA (China Robot Industry Alliance), has been formed, according to a 2016 report by the Ministry of Industry and Information Technology. By contrast, according to the same report, the sector was virtually non-existent a decade ago.

Under “Made in China 2025” and the five-year robot plan launched last April, Beijing is focusing on automating key sectors of the economy, including car manufacturing, electronics, home appliances, logistics, and food production. At the same time, the government wants to increase the share of in-country-produced robots to more than 50% by 2020, up from 31% last year.

Robot makers, and companies that automate, are both eligible for subsidies, low-interest loans, tax waivers, rent-free land and other incentives. One such program lured back Chinese engineers working overseas; another poured billions of dollars into technology parks dedicated to robotics production and related businesses; another encouraged local governments to help regional companies deploy robots in their production processes; and despite its ongoing crackdown on capital outflows, the government has given green lights to Chinese companies acquiring Western robotics technology companies.

Many of those acquisitions were reported by The Robot Report during 2016 and are reflected (with little red flags) in the chart of the top 15 acquisitions of robotics-related companies; highlights include:

  1. Midea, a Chinese consumer products manufacturer, acquired KUKA, one of the Big 4 global robot manufacturers
  2. The Kion Group, a predominantly Chinese-funded warehousing systems and equipment conglomerate, acquired Dematic, a large European AGV and material handling systems company
  3. KraussMaffei, a big German industrial robots integrator, was acquired by ChemChina
  4. Paslin, a US-based industrial robot integrator, was acquired by Zhejiang Wanfeng Technology, a Chinese industrial robot integrator

China has set goals of producing 150,000 industrial robots a year by 2020, 260,000 by 2025, and 400,000 by 2030. If achieved, the plan should help generate $88 billion over the next decade. China’s stated goal in both its five-year plan and the Made in China 2025 program is to overtake Germany, Japan, and the United States in manufacturing sophistication by 2049, the 100th anniversary of the founding of the People’s Republic of China. To make that happen, the government needs Chinese manufacturers to adopt robots by the millions. It also wants Chinese companies to start producing more of those robots.

Analysts and Critics

Various research reports are predicting that more than 250,000 industrial pick-and-place, painting and welding robots will be purchased and deployed in China by 2019. That figure exceeds the total global sales of all types of industrial robots in 2014!

Research firms predicting dramatic growth for the domestic Chinese robotics industry are also predicting very low-cost devices. Their reports are contradicted by academics, roboticists and others, who point out that there are so many new robot manufacturing companies in China that none will be able to build the many thousands of robots per year needed to benefit from scale. Further, many of the components that make up a robot are intricate and costly, e.g., speed reducers, servo motors and control panels. Consequently, these are purchased from Nabtesco, Harmonic Drive, Sumitomo and other Japanese, German and US companies. Although a few of the startups are attempting to make reduction gears and similar devices, the lack of these component manufacturers in China may, for the time being, cap how low costs can go and how much can be done in-country.

“We aim to increase the market share of homegrown servomotors, speed reducers and control panels in China to over 30 percent by 2018 or 2019,” said Qu Xianming, an expert with the National Manufacturing Strategy Advisory Committee, which advises the government on plans to upgrade the manufacturing sector. “By then, these indigenous components could be of high enough quality to be exported to foreign countries,” Qu said in an interview with China Daily. “Once the target is met, it will lay down a strong foundation for Chinese parts makers to expand their presence.”

Regardless, China, with governmental directives and incentives, has become the world’s biggest buyer of robots and is also growing a very large in-country industry to make and sell robots of all types.


The Robot Report now has over 500 Chinese companies in its online directories and on its Global Map

The Robot Report and its research team have been able to identify over 500 companies that make or are directly involved in making robots in China. The CRIA (China Robot Industry Alliance) and other sources put the number closer to 800. The Robot Report is limited by its own research capabilities, language barriers, and the scarcity of information about Chinese robotics companies, their websites, and their contact people.

These companies are combined with other global companies – now totaling over 5,300 – in our online directories and plotted on our global map so that you can research by area. You can explore online and filter in a variety of ways.

Use Google’s directional and +/- controls to navigate, enlarge, and home in on a geographical area of interest (or double-click near where you want to enlarge). Click on one of the colored markers to get a pop-up window with the name, type, focus, location and a link to the company’s website.

[NOTE: the map shows a single entry for each company’s headquarters regardless of how many branches, subsidiaries and factory locations that company might have; consequently, international companies with factories and service centers in China won’t appear. Further note that The Robot Report’s database doesn’t contain companies that merely use robots; it focuses on those involved in making them.]

The Filter pull-down menu lets you choose any one of the seven major categories:

  1. Industrial robot makers
  2. Service robots used by corporations and governments
  3. Service robots for personal and private use
  4. Integrators
  5. Robotics-related start-up companies
  6. Universities and research labs with special emphasis on robotics
  7. Ancillary businesses providing engineering, software, components, sensors and other products and services to the industry.

In the chart below, 500 Chinese companies are tabulated by their business type and area of focus. Your help in adding to the map and keeping it as accurate and up-to-date as possible would be greatly appreciated. Please send robotics-related companies that we have missed (or that are new) to info@therobotreport.com.


Localization uncertainty-aware exploration planning

Autonomous exploration and reliable mapping of unknown environments is a major challenge for mobile robotic systems. For many important application domains, such as industrial inspection or search and rescue, the task is further complicated by the fact that operations often have to take place in GPS-denied environments and possibly visually-degraded conditions.

Source: Dr Kostas Alexis, UNR

In this work, we move away from deterministic approaches to autonomous exploration and propose a localization uncertainty-aware receding-horizon exploration and mapping planner, verified using aerial robots. The planner follows a two-step optimization paradigm. First, in an online-computed random tree, the algorithm finds a finite-horizon branch that maximizes the amount of space expected to be explored. The first viewpoint configuration of this branch is selected, but the path towards it is decided through a second planning step. Within that step, a new tree is sampled, admissible branches arriving at the reference viewpoint are found, and the robot’s belief about its state and the tracked landmarks of the environment is propagated along each of them. The branch that minimizes the expected localization uncertainty is selected, the corresponding path is executed by the robot, and the whole process is iteratively repeated.
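To make the two-step paradigm concrete, here is a minimal Python sketch of one planning iteration. The tree interface and the gain, propagation, and uncertainty functions are hypothetical stand-ins passed in as parameters; the released rhem_planner code is the authoritative implementation.

```python
# A minimal sketch of the two-step receding-horizon paradigm described
# above. Every callable passed in (tree sampler, exploration-gain and
# uncertainty estimators, belief propagator) is a hypothetical stand-in,
# not the API of the released rhem_planner package.

def plan_iteration(state, belief, sample_tree,
                   exploration_gain, propagate_belief, uncertainty):
    # Step 1: in an online-sampled random tree, find the finite-horizon
    # branch that maximizes the volume of space expected to be explored.
    tree = sample_tree(state)
    best_branch = max(tree.branches(), key=exploration_gain)
    goal = best_branch.first_viewpoint()

    # Step 2: sample a second tree of admissible paths to that viewpoint,
    # propagate the robot/landmark belief along each candidate, and pick
    # the path with the lowest expected localization uncertainty.
    path_tree = sample_tree(state, goal=goal)
    best_path = min(path_tree.paths_to(goal),
                    key=lambda path: uncertainty(propagate_belief(belief, path)))
    return best_path  # executed by the robot; then the cycle repeats
```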

The algorithm has been experimentally verified with aerial robotic platforms equipped with a stereo visual-inertial system operating in both well-lit and dark conditions, as shown in our videos:

To enable further developments, research collaboration and consistent comparison, we have released an open source version of our localization uncertainty-aware exploration and mapping planner, experimental datasets and interfaces. To get the code, please visit: https://github.com/unr-arl/rhem_planner

This research was conducted at the Autonomous Robots Lab of the University of Nevada, Reno.

Reference:

Christos Papachristos, Shehryar Khattak, Kostas Alexis, “Uncertainty-aware Receding Horizon Exploration and Mapping using Aerial Robots,” IEEE International Conference on Robotics and Automation (ICRA), May 29-June 3, 2017, Singapore

Looking at new trends in Distributed Robotics Systems and Society

Figure 1: A distributed robotic system managing the logistics of a warehouse.

It isn’t a secret that distributed robotic systems are starting to revolutionize many applications, from targeted material delivery (e.g., Amazon Robotics) to precision farming. Assisted by technological advancements such as cloud computing, novel hardware designs, and new manufacturing techniques, distributed robot systems are nowadays becoming an important part of industrial activities such as warehouse logistics and autonomous transportation.

However, as many engineers and scientists in this field know, several of the characteristics that make these systems ideal for certain future applications — robot autonomy, decentralized control, collective emergent behavior, collective learning, knowledge sharing, etc. — hinder the technology’s transition from academic institutions to the public sphere. For instance, controlling the motion and behavior of large teams of robots still presents unique challenges for human operators, who cannot yet effectively convey their high-level intentions to deployed systems. Moreover, robots collaborating through the cloud may struggle to apply shared knowledge because of physical hardware differences. Solutions to these issues are likely necessary steps towards mainstream adoption.

Figure 2: Different types of robots share a blockchain communication channel using their public keys as main identifiers.

In response to these challenges, new lines of research propose innovative synergies to tackle the open problems in the field. Examples include wearable and gaming technologies that reduce the complexity of controlling a robotic swarm, and blockchain-based models that create new consensus and business mechanisms for large teams of robots.

In order to understand the current state of the art in distributed robotic systems and to anticipate its breakthroughs, the International Journal of Advanced Robotic Systems has launched a special issue titled “Distributed Robotic Systems and Society”. This special issue seeks to move beyond the classical view of distributed robotic systems to advance our understanding of the future role of these systems in the marketplace and public society. Insights into open questions in the field are especially welcome. For instance: what security methods are available and efficient for these systems? What kinds of distributed robotic algorithms are suitable for human-oriented interactions? Are there new interfaces to connect with these systems or reduce their complexity? Are distributed networks such as Bitcoin a feasible way to integrate distributed robotic systems into our society? Are there new business models for distributed robot ventures? How can distributed robotic systems make use of the nearly unlimited information accessible in the cloud?

We also welcome submissions on other topics addressing multi-robot systems in society. We seek papers with conceptual and theoretical contributions as well as papers documenting valuable results of experiments conducted with real robots. Finally, the editorial team of this special issue (Dr. Penaloza, Dr. Hauert, and myself) would like to encourage researchers and scientists to submit their manuscripts. We are confident that the ideas, methods, and results included in this special issue will help the scientific community, as well as industry, reach new horizons in the field of distributed robotic systems.

The Drone Center’s Weekly Roundup: 6/5/17

A German Heron 1 UAV. Credit: Airbus Defence and Space

May 29, 2017 – June 4, 2017

News

A German court has thrown out a protest filed by U.S. drone maker General Atomics Aeronautical Systems over the German military’s decision to acquire the Israel Aerospace Industries Heron TP drone over the U.S. firm’s Reaper. The decision by the Oberlandesgericht, a higher regional court, allows the Bundeswehr to proceed with the planned acquisition of five Heron TPs, a medium-altitude long-endurance surveillance and reconnaissance drone. (DefenseNews)

Commentary, Analysis, and Art

At War on the Rocks, Jonathan Gillis argues that the U.S. military is not prepared for a future filled with enemy drones.

Also at War on the Rocks, Michael Horowitz considers the ways in which emerging technologies will shape how the U.S. military fights in future conflicts.

In a series of articles at Breaking Defense, Sydney J. Freedberg Jr. looks at how the U.S. military is integrating artificial intelligence into its operations.  

At Defense One, Patrick Tucker writes that Poland is planning to invest in small, lethal drones rather than large unmanned systems like the Reaper.

A study by the University of Washington found delivery drones tend to produce fewer carbon emissions than trucks when traveling short distances. (UW Today)

At TechCrunch, Brian Heater argues that the DJI Spark foldable drone is not quite the “mainstream” drone that it was made out to be.

At Popular Mechanics, Joe Pappalardo considers the role that small, disposable drones will play in the future of warfare.

At ArsTechnica, Sean Gallagher recalls his role in the U.S. Navy’s early experiments with unmanned aircraft.

At the Montreal Gazette, Marc Garneau discusses the potential threat posed by drones to aircraft.

At the Washington Post, Michael Laris considers what is likely to happen in the wake of the federal appeals court’s decision to strike down the FAA’s drone registration rule for hobbyists.

At the Jamestown Foundation, Tobias J. Burgers and Scott N. Romaniuk look at how al-Qaeda learned to adapt to U.S. drone strikes.

The New York Times reports that a former head of the CIA’s drone program will now lead the agency’s Iran operations.

A paper in Remote Sensing offers a survey of the different systems and methods for using drones for marine mammal research. (MDPI)

A paper in Frontiers in Plant Science compares the use of drones and satellite images in monitoring plant invasions.

In Critical Studies in Security, Katharine Hall Kindervater argues that targeted killings are best understood within the context of a shift towards lethal surveillance.

In the Air & Space Power Journal, Lt. Col. Thomas S. Palmer and Dr. John P. Geis II argue that effective counter-drone weapons will be essential in future conflicts.

In the Naval War College Review, Jeffrey E. Kline considers how to effectively integrate robotics into the fleet while recognizing fiscal constraints.

Know Your Drone

Researchers at the Charles Stark Draper Laboratory and Howard Hughes Medical Institute have created a system that turns live dragonflies into steerable drones. (Gizmodo)

A team at the University of Sherbrooke has developed a solar-powered drone that can autonomously land on lakes to recharge its batteries. (IEEE Spectrum)

U.S. firm Drone Aviation Holding Corp. unveiled an automated winch tethering system for DJI Inspire commercial multirotor drones. (Unmanned Systems Technology)

Amazon has been awarded a patent for a shipping label that doubles as a parachute for items delivered by drone. (GeekWire)

Meanwhile, Walmart has been awarded a patent for a system that uses blockchain technology to keep track of delivery drones. (CoinDesk)

The Office of Naval Research is developing a drone that can detect buried mines for use in amphibious landings. (Shephard Media)

Apple announced that its education programming app, Swift Playgrounds, will soon support code-writing for robots and drones. (The Verge)

Belarusian firms presented a range of new military unmanned aircraft at the MILEX 2017 exhibition in Minsk, including the Belar YS-EX, a medium-altitude long-endurance system. (IHS Jane’s 360)

In a test flight, China’s Caihong solar-powered ultra long-endurance drone reached an altitude of 65,000 feet. (The Sun)

Defense firm Lockheed Martin successfully completed a beyond line of sight pipeline inspection operation with its Indago 2 commercial multirotor drone. (Unmanned Aerial Online)

Robot maker SMP Robotics unveiled the S5 Security Robot, an unmanned ground vehicle. (Unmanned Systems Technology)

In a test, the U.S. Army used two Raytheon Stinger anti-aircraft missiles to intercept two drones. (Press Release)

The U.S. Special Operations Command has completed testing for its Joint Threat Warning System sensor for the Puma hand-launched tactical drone. (Shephard Media)

Aerospace firm Russian Helicopters is developing a fixed-wing vertical take-off and landing drone. (Shephard Media)

The U.S. Army is planning to test its autonomous trucks on a public highway in Michigan later this month. (Voice of America)

Texas Instruments has designed two circuit-based subsystems that it claims could increase the efficiency of battery-powered drones. (Drone Life)

Drones at Work

In a proof of concept demonstration, Drone Dispatch delivered a box of donuts to a customer by drone in Colorado. (CNET)

The organizers of the Torbay Airshow in the U.K. banned the use of drones at the event. (Devon Live)

Police in Snellville, Georgia are investigating various reports of a drone being used to spy on residents in their homes. (WSBTV)

Officers from the Stafford County Sheriff’s Office in Virginia used a drone to find an armed suspect. (WTOP)

The Albany County Sheriff’s Office in New York has acquired a drone for a range of operations. (Albany Times Union)

North Dakota’s governor has established a task force to support the development of counter-drone technologies. (Press Release)

The Israel Defense Forces is equipping infantry, border defense, and combat intelligence corps units with DJI drones. (Times of Israel)

A Russian bank plans to begin using drones to deliver cash to customers. (Forbes)

Industry Intel

Speaking at the Code Conference, Intel CEO Brian Krzanich said that Intel will not develop a consumer drone. (Recode)

Snap, the social media company that owns Snapchat, has acquired Ctrl Me Robotics, a California-based drone startup. (Buzzfeed)

The Australian Ministry of Defense announced that it will invest $75 million in small unmanned aircraft systems, including the AeroVironment Wasp AE, for the Australian Army. (Press Release)

The Netherlands Ministry of Defence selected the Insitu Integrator to replace the Insitu ScanEagle. (Unmanned Systems Technology)

The U.S. Army awarded Six3 Advanced Systems a $10.5 million contract to design and develop a prototype for a squad of human and unmanned assets. (DoD)

Three German firms, ESG Elektroniksystem und Logistik, Diehl Defence and Rohde & Schwarz, have partnered to market the Guardion counter-drone solution. (Shephard Media)

2G Robotics will provide the laser scanning, imaging, and illumination systems for the Norwegian Navy’s Kongsberg Maritime Hugin autonomous underwater vehicles. (Shephard Media)

France’s Direction Générale de l’Armement will take delivery of the Thales Spy’Ranger mini drone beginning in late 2018. (IHS Jane’s Defence Weekly)

For updates, news, and commentary, follow us on Twitter. The Weekly Drone Roundup is a newsletter from the Center for the Study of the Drone. It covers news, commentary, analysis and technology from the drone world. You can subscribe to the Roundup here.

Robohub Digest 05/17: RoboCup turns 20, ICRA in Singapore, robot inspector helps check bridges

A quick, hassle-free way to stay on top of robotics news, our robotics digest is released on the first Monday of every month. Sign up to get it in your inbox.

20 years of RoboCup

20 years in the books! RoboCup, which first started in 1997, was originally established to bring forth a team of robots that could beat the human soccer World Cup champions. Twenty years on, RoboCup is much more than just a soccer competition. It has grown into an international movement, with teams competing against each other in four different leagues and many sub-competitions, including home, work, and rescue missions. The complexity of the missions requires intelligent, dynamic, sensing robots that can react to chaotic and changing environments. And in its 20-year history, the competition has produced numerous winners who have gone on to achieve great things.

Without robot competitions like RoboCup, the field of robotics wouldn’t be where it is today. So to celebrate 20 years of RoboCup, the Federation launched a video series featuring each of the leagues with one short video for those who just want a taster, and one long video for the full story. Robohub will be featuring one league every week leading up to RoboCup 2017 in Nagoya, Japan. You can watch all videos from the playlist here.

ICRA 2017 in Singapore

While we have RoboCup 2017 to look forward to, the IEEE 2017 International Conference on Robotics and Automation (ICRA) took place in Singapore. Under the conference theme “Innovation, Entrepreneurship, and Real-world Solutions”, the event brought together engineers, researchers, entrepreneurs and industry to address some of the major challenges of our times.

This year’s keynote speakers included Louis Phee, who described the development and implementation of a surgical robot called EndoMaster, and Peter Luh, who gave an overview of Industry 4.0. Alongside the keynotes, the event included 25 workshops and tutorials, various exhibitions, and four Robot Challenges.

Self-driving cars: Competition heats up

May wasn’t just events and competitions. In the world of autonomous vehicles, France’s Groupe PSA, the second-largest car manufacturer in Europe and owner of brands such as Peugeot and Citroen, has teamed up with autonomous car maker nuTonomy. The collaboration will seek to build a self-driving Peugeot 3008, which they hope will hit the roads of Singapore in the not-too-distant future.

Groupe PSA and other well-known car manufacturers, including Ford Motors, that are just starting their bid to enter the autonomous car market are lagging way behind Waymo (the self-driving car company spun out of Google’s parent Alphabet) when it comes to putting their cars on actual roads. And with Waymo about to team up with ride-hailing start-up Lyft, it is getting ever closer to making its autonomous cars part of mainstream traffic.

Another player in the self-driving car game we haven’t mentioned yet is cab service Uber. The company is locked in a dispute with Google over allegedly stolen design secrets and will likely be going to public trial later in the year. Linked to the lawsuit, Uber fired the engineer at the heart of the dispute, Anthony Levandowski, who is suspected of having stolen company secrets when he left Google and founded his own company, Otto, which was acquired by Uber last year. The information war continues.

Self-driving cars: Innovation

Meanwhile, a group of veterans previously linked with Alphabet founded their own company – DeepMap Inc. – which aims to develop systems that allow cars to navigate complex cityscapes.

It’s not just cityscapes that are complex to navigate. Other vehicles, cyclists, and pedestrians on busy roads form part of the unpredictable environment a self-driving car has to cope with. It will therefore become necessary for autonomous cars to understand and predict behaviour. Here, machine learning will be key, as Dr Nathan Griffiths explains in The Royal Society’s blog on machine learning in research.

And with so much interest and research into autonomous, intelligent cars, it’s not surprising that some, like Chris Urmson in a recent lecture at CMU, predict we will see a shift from the traditional transport model, where people own their cars, to a more dynamic, responsive model of “Transportation as a Service”, in which companies own fleets of cars that can be used by anyone when and where they’re needed.

Robots in the fields

Many of the innovations that have enabled autonomous cars are transferable to industrial, commercial and agricultural vehicles. In the case of the latter, self-driving tractors and precision agribots have already increased productivity and made 24-hour autonomous, high-yield farming a possibility.

In Salinas Valley, California, robots are already used to pick lettuce, and to help vineyard owners decide when they need to water their plants. And GV (formerly Google Ventures) just invested $10 million into Abundant Robotics to build a picking robot, initially to pick apples but with potential for adaptation to support the harvest of other fruit.

It is believed that the uptake of farming technologies is not progressing as quickly as it could, because farmers have been slow to accept precision-agriculture products, software, equipment and practices. But the farming (r)evolution is well underway: agricultural robotics is already a $3 billion industry, set to grow to an impressive $12 billion by 2026.

Robots in the skies

While robots are still waiting to be fully accepted in agriculture, it’s no secret that the US military uses drones extensively. What came as a surprise was the strange video feed that surfaced in May from what appeared to be a drone flying over Florida’s panhandle, apparently sponsored by the National Reconnaissance Office – a body that doesn’t usually publicise its drone-related activities. The footage, now believed to have been shot in February, was likely uploaded as a demo video by a contractor.

While the Florida video caused quite a bit of confusion this month, another drone was making headlines for very different reasons. A modified version of the military TigerShark, the ArcticShark drone is now helping climate research: it is on a scientific mission to help scientists understand cloud formation and other atmospheric processes.

In other drone-related news, the Alaska Department of Transportation and Public Facilities has allowed some of its employees to receive licenses to operate drones to support projects involving roads, bridges and other structures. And a report has shown that drone funding fell by 64% in 2016 compared to 2015, with DJI dominating the market and issues such as battery life, connectivity problems and drone regulations stifling development efforts.

Robots in the water

From the skies to the sea: this May, NATO Nations agreed to use JANUS for their digital underwater communications. JANUS has the potential to make military and civilian, NATO and non-NATO, devices fully interoperable, doing away with the communication problems between systems and models by different manufacturers that have made underwater communication difficult up to this point.

Robots in the lab

Researchers at the Institute for Human and Machine Cognition in Pensacola, Florida, have developed a two-legged robot that can run without sensors or a computer. The robot, called the Planar Elliptical Runner, is stabilized by its physical design alone, which sets it apart from other two- and four-legged robots.

Meanwhile, a team at the University of California, Berkeley, has come up with a nimble-fingered robot that is able to pick up a wide range of objects using a 3D sensor and a deep learning neural network. It may not be perfect, but it’s the most nimble-fingered robot yet and the technology may find applications in picking and manufacturing in future.

Human-Machine Interaction

Most of the robots we interact with are practical helpers, offering support in the home, on the road or at work. Innovations some of us may already have interacted with include autonomous lawn mowers, cars with self-driving features, and Amazon’s Alexa. And there are plenty more robots in the pipeline.

A four-wheeled, waterproof, battery-powered robot inspector developed by a team in Nevada may soon be supporting civil engineers and safety inspectors with bridge checks, reducing the chance of human error and omissions that could lead to a collapse like the 2007 I-35W bridge disaster in Minnesota.

And finally, engineers in Germany have built a robot priest called BlessU-2 that can beam light from its hands and deliver blessings in five languages. The robot is meant to spark discussions about faith, the church and the potential of AI.

Learning resources: Robot Academy

And to finish off our digest for May, we wanted to highlight a new open online resource for robotics education: the Robot Academy. Developed by Professor Peter Corke and the Queensland University of Technology (QUT), the Academy offers more than 200 lessons from robot joint control architecture to limits of electric motor performance. So if you find yourself with a bit of time on your hands, why not try a Robot Academy Masterclass?

Or if you’re after something a little bit different, there’s a new toolkit on “Computational Abstractions for Interactive Design of Robotic Devices” that lets you drag and drop parts of a virtual robot on screen without needing to know exactly what connects to what, as a complex physics engine ensures your robot won’t fail or fall over.

Missed any previous Digests? Click here to check them out.

Upcoming events for June – July 2017

Intelligent Ground Vehicle Competition: June 2-5, Rochester, MI.

CES Asia: June 7-9, Shanghai, China.

Unmanned Cargo Ground Vehicle Conference: June 13-14, Maaspoort, Venlo, The Netherlands.

Autonomous machines world: June 26-27, Berlin, Germany.

RoboUniverse: June 28-30, Seoul.

CIROS: July 5-8, Shanghai, China.

ASCEND Conference and Expo: July 19-21, Portland, OR.

RoboCup: July 25-31, Nagoya, Japan.

Call for papers:

1st Annual Conference on Robot Learning (CoRL 2017): Call for papers deadline 28 June.

Giving robots a sense of touch

A GelSight sensor attached to a robot’s gripper enables the robot to determine precisely where it has grasped a small screwdriver, removing it from and inserting it back into a slot, even when the gripper screens the screwdriver from the robot’s camera. Photo: Robot Locomotion Group at MIT

Eight years ago, Ted Adelson’s research group at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled a new sensor technology, called GelSight, that uses physical contact with an object to provide a remarkably detailed 3-D map of its surface.

Now, by mounting GelSight sensors on the grippers of robotic arms, two MIT teams have given robots greater sensitivity and dexterity. The researchers presented their work in two papers at the International Conference on Robotics and Automation last week.

In one paper, Adelson’s group uses the data from the GelSight sensor to enable a robot to judge the hardness of surfaces it touches — a crucial ability if household robots are to handle everyday objects.

In the other, Russ Tedrake’s Robot Locomotion Group at CSAIL uses GelSight sensors to enable a robot to manipulate smaller objects than was previously possible.

The GelSight sensor is, in some ways, a low-tech solution to a difficult problem. It consists of a block of transparent rubber — the “gel” of its name — one face of which is coated with metallic paint. When the paint-coated face is pressed against an object, it conforms to the object’s shape.

The metallic paint makes the object’s surface reflective, so its geometry becomes much easier for computer vision algorithms to infer. Mounted on the sensor opposite the paint-coated face of the rubber block are three colored lights and a single camera.

“[The system] has colored lights at different angles, and then it has this reflective material, and by looking at the colors, the computer … can figure out the 3-D shape of what that thing is,” explains Adelson, the John and Dorothy Wilson Professor of Vision Science in the Department of Brain and Cognitive Sciences.
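The reconstruction Adelson describes is essentially classic three-light photometric stereo. The sketch below shows the idea under simplifying assumptions (a Lambertian surface and illustrative light directions, not GelSight’s actual calibration):

```python
import numpy as np

# Classic three-light photometric stereo, the principle behind GelSight's
# shape recovery. The light directions below are illustrative only.
L = np.array([[ 1.0,  0.00, 1.0],
              [-0.5,  0.87, 1.0],
              [-0.5, -0.87, 1.0]])          # one row per colored light
L /= np.linalg.norm(L, axis=1, keepdims=True)
L_inv = np.linalg.inv(L)

def surface_normals(red, green, blue):
    """Recover per-pixel surface normals from the three color channels.

    For a Lambertian surface, intensity = albedo * (light_dir . normal),
    so stacking the three channels gives a 3x3 linear system per pixel.
    """
    I = np.stack([red, green, blue], axis=-1).astype(float)  # (H, W, 3)
    g = I @ L_inv.T                       # normals scaled by albedo
    norm = np.linalg.norm(g, axis=-1, keepdims=True)
    return g / np.clip(norm, 1e-8, None)  # unit normals; integrate for depth
```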

In both sets of experiments, a GelSight sensor was mounted on one side of a robotic gripper, a device somewhat like the head of a pincer, but with flat gripping surfaces rather than pointed tips.

Contact points

For an autonomous robot, gauging objects’ softness or hardness is essential to deciding not only where and how hard to grasp them but how they will behave when moved, stacked, or laid on different surfaces. Tactile sensing could also aid robots in distinguishing objects that look similar.

In previous work, robots have attempted to assess objects’ hardness by laying them on a flat surface and gently poking them to see how much they give. But this is not the chief way in which humans gauge hardness. Rather, our judgments seem to be based on the degree to which the contact area between the object and our fingers changes as we press on it. Softer objects tend to flatten more, increasing the contact area.

The MIT researchers adopted the same approach. Wenzhen Yuan, a graduate student in mechanical engineering and first author on the paper from Adelson’s group, used confectionery molds to create 400 groups of silicone objects, with 16 objects per group. In each group, the objects had the same shapes but different degrees of hardness, which Yuan measured using a standard industrial scale.

Then she pressed a GelSight sensor against each object manually and recorded how the contact pattern changed over time, essentially producing a short movie for each object. To both standardize the data format and keep the size of the data manageable, she extracted five frames from each movie, evenly spaced in time, which described the deformation of the object that was pressed.

Finally, she fed the data to a neural network, which automatically looked for correlations between changes in contact patterns and hardness measurements. The resulting system takes frames of video as inputs and produces hardness scores with very high accuracy. Yuan also conducted a series of informal experiments in which human subjects palpated fruits and vegetables and ranked them according to hardness. In every instance, the GelSight-equipped robot arrived at the same rankings.
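The data preparation Yuan describes is easy to sketch. In the snippet below, `hardness_model` stands in for the trained network, whose architecture the article does not detail:

```python
import numpy as np

# Sketch of the preprocessing described above: five frames, evenly spaced
# in time, summarize each press "movie" before being scored.

def summarize_press(frames):
    """Pick five evenly spaced frames from one press sequence."""
    idx = np.linspace(0, len(frames) - 1, num=5).round().astype(int)
    return [frames[i] for i in idx]

def estimate_hardness(frames, hardness_model):
    # `hardness_model` is a stand-in for the trained network that maps
    # contact-pattern frames to a scalar hardness score.
    clip = np.stack(summarize_press(frames))   # (5, H, W) contact images
    return hardness_model(clip)
```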

Yuan is joined on the paper by her two thesis advisors, Adelson and Mandayam Srinivasan, a senior research scientist in the Department of Mechanical Engineering; Chenzhuo Zhu, an undergraduate from Tsinghua University who visited Adelson’s group last summer; and Andrew Owens, who did his PhD in electrical engineering and computer science at MIT and is now a postdoc at the University of California at Berkeley.

Obstructed views

The paper from the Robot Locomotion Group was born of the group’s experience with the Defense Advanced Research Projects Agency’s Robotics Challenge (DRC), in which academic and industry teams competed to develop control systems that would guide a humanoid robot through a series of tasks related to a hypothetical emergency.

Typically, an autonomous robot will use some kind of computer vision system to guide its manipulation of objects in its environment. Such systems can provide very reliable information about an object’s location — until the robot picks the object up. Especially if the object is small, much of it will be occluded by the robot’s gripper, making location estimation much harder. Thus, at exactly the point at which the robot needs to know the object’s location precisely, its estimate becomes unreliable. This was the problem the MIT team faced during the DRC, when their robot had to pick up and turn on a power drill.

“You can see in our video for the DRC that we spend two or three minutes turning on the drill,” says Greg Izatt, a graduate student in electrical engineering and computer science and first author on the new paper. “It would be so much nicer if we had a live-updating, accurate estimate of where that drill was and where our hands were relative to it.”

That’s why the Robot Locomotion Group turned to GelSight. Izatt and his co-authors — Tedrake, the Toyota Professor of Electrical Engineering and Computer Science, Aeronautics and Astronautics, and Mechanical Engineering; Adelson; and Geronimo Mirano, another graduate student in Tedrake’s group — designed control algorithms that use a computer vision system to guide the robot’s gripper toward a tool and then turn location estimation over to a GelSight sensor once the robot has the tool in hand.

In general, the challenge with such an approach is reconciling the data produced by a vision system with data produced by a tactile sensor. But GelSight is itself camera-based, so its data output is much easier to integrate with visual data than the data from other tactile sensors.

In Izatt’s experiments, a robot with a GelSight-equipped gripper had to grasp a small screwdriver, remove it from a holster, and return it. Of course, the data from the GelSight sensor don’t describe the whole screwdriver, just a small patch of it. But Izatt found that, as long as the vision system’s estimate of the screwdriver’s initial position was accurate to within a few centimeters, his algorithms could deduce which part of the screwdriver the GelSight sensor was touching and thus determine the screwdriver’s position in the robot’s hand.
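One way to picture that deduction is as a scoring loop over candidate poses near the vision prior. Everything in this sketch (the contact-patch renderer, the pose sampler, the mean-squared-error score) is a hypothetical stand-in, not the authors’ actual algorithm:

```python
import numpy as np

# Hypothetical sketch of tactile localization: score candidate poses near
# the vision system's prior by how well the object's predicted local
# geometry matches the patch actually sensed by the GelSight.

def refine_pose(prior_pose, tactile_patch, object_model, sample_near):
    best_pose, best_score = None, -np.inf
    for pose in sample_near(prior_pose, n=200):   # poses within a few cm
        predicted = object_model.render_contact_patch(pose)  # assumed helper
        score = -np.mean((predicted - tactile_patch) ** 2)   # higher = closer match
        if score > best_score:
            best_pose, best_score = pose, score
    return best_pose  # implies which part of the screwdriver is being touched
```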

“I think that the GelSight technology, as well as other high-bandwidth tactile sensors, will make a big impact in robotics,” says Sergey Levine, an assistant professor of electrical engineering and computer science at the University of California at Berkeley. “For humans, our sense of touch is one of the key enabling factors for our amazing manual dexterity. Current robots lack this type of dexterity and are limited in their ability to react to surface features when manipulating objects. If you imagine fumbling for a light switch in the dark, extracting an object from your pocket, or any of the other numerous things that you can do without even thinking — these all rely on touch sensing.”

“Software is finally catching up with the capabilities of our sensors,” Levine adds. “Machine learning algorithms inspired by innovations in deep learning and computer vision can process the rich sensory data from sensors such as the GelSight to deduce object properties. In the future, we will see these kinds of learning methods incorporated into end-to-end trained manipulation skills, which will make our robots more dexterous and capable, and maybe help us understand something about our own sense of touch and motor control.”

VENTURER driverless car project publishes results of first trials

VENTURER is the first Connected and Autonomous Vehicle project to start in the UK. The results of VENTURER’s preliminary trials show that the handover process is a safety-critical issue in the development of Autonomous Vehicles (AVs).

The first VENTURER trials set out to investigate ‘takeover’ (time taken to reengage with vehicle controls) and ‘handover’ (time taken to regain a baseline/normal level of driving behaviour and performance) when switching frequently between automated and manual driving modes within urban and extra-urban settings. This trial is believed to be the first to directly compare handover to human driver-control from autonomous mode in both simulator and autonomous road vehicle platforms.

The handover process is important from a legal and insurance perspective – the length of time it takes people to regain full control of the vehicle represents a meaningful risk to insurers and understanding when control is transferred between the vehicle and the driver has liability implications.

David Williams from AXA outlined that, “The results of this trial have been very useful as we consider the issues that the handover process raises for insurers. Although some motor manufacturers have said they will skip SAE Level 3, some are progressing with vehicles that will require the driver to take back control of the vehicle. The insurance industry will need to assess the relative safety of the handover systems as they come to market but VENTURER’s trial 1 results show that with robust testing we can properly assess how humans and autonomous vehicles interact during this crucial phase of the technologies’ evolution.”

VENTURER designed, tested and analysed both simulator and road vehicle-based handover trials.

Fifty participants were tested in a simulator and/or in the autonomous vehicle on roads on the UWE Bristol campus. The tests were at speeds of 20, 30, 40 and 50 mph in the simulator and 20 mph in the autonomous vehicle, speeds common in urban and extra-urban settings. Participants’ baseline driving behaviour was also measured, along with the length of time it took them to return to that baseline following handover.

During the trial, the driver was aware that they might be alerted to take control of the vehicle at any moment, either due to the decisions made by the driver, or the capabilities of the vehicle in particular situations. VENTURER has classified this as planned handover.

The 20- and 30- mph scenarios involved town/city urban driving and the 40- and 50- mph scenarios involved outer-town/city extra-urban driving. Driving speed, lateral lane position, and braking behaviour (amongst other measures) were taken.

A key finding is that it took 2-3 seconds for participants to ‘takeover’ manual controls and resume active driving after short periods of autonomous driving in urban environments.

They also found that participants drove more slowly than the recommended speed limit for up to 55 seconds following a handover request, which suggests more cautious, but not necessarily safer, driving. This could be important for traffic management: if drivers on the road replicated this behaviour, it might impact the flow of traffic and offset some of the predicted benefits of AVs.

Image: Local World

In addition, participants returned to their baseline manual driving behaviour within 10-20 seconds of handover, with most measures, including speed, stabilising after 20-30 seconds. This was not the case in the highest-speed simulator condition, where stabilisation did not seem to occur for most measures within the 55-second measured handover period.

The team says these results have implications for the designers of autonomous vehicles with handback functionality, for example, in terms of phased handover systems. The results also inform the emerging market for insurance for autonomous vehicles.

Chair of the project, Lee Woodcock (Atkins) said, “The outcome of this research for trial one is significant and must provide food for thought as the market develops for driverless cars and how we progress through the different levels of automation. Further research must also explore interaction not just between vehicles but also with network operations and city management systems.”

Dr Phillip Morgan (UWE Bristol) said, “Designers need to proceed with caution and consider human performance under multiple driving conditions and scenarios in order to plot accurate takeover and handover time safety curves. In the time it takes for drivers to reach their baseline behaviour, the vehicle may have travelled some distance, depending on the speed. These initial trials show that there are some risk elements in the handover process and bigger studies with more participants may be needed to ensure there is sufficient data to build safe handover support systems.”
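To put rough numbers on Dr Morgan’s point, here is the distance a vehicle covers during the 2-3 second takeover window at each of the trial speeds (a back-of-the-envelope illustration, not a figure from the trial):

```python
# Distance covered during a 2-3 s takeover at the VENTURER trial speeds.
MPH_TO_MS = 0.44704  # metres per second in one mph

for mph in (20, 30, 40, 50):
    v = mph * MPH_TO_MS
    print(f"{mph} mph: {2 * v:.0f}-{3 * v:.0f} m travelled before takeover completes")
```

At 50 mph, that is roughly 45-67 metres driven before the driver has even re-engaged the controls, let alone returned to baseline behaviour.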

Professor Graham Parkhurst (UWE Bristol) said, “The results of these tests suggest that autonomous vehicles on highways should slow to a safe speed before handover is attempted. Further research is required to clarify what that safe speed is, but it would be substantially slower than the 70 mph motorway limit, and somewhat lower than the highest speed (50 mph) considered in our simulator trials.”

The trial clearly demonstrated that there were no major differences between control of the simulator and the Wildcat platforms used within the trial, validating the future use of simulators for the development of autonomous vehicles and associated technologies.

Click here to view the full results and papers.

Two stars, different fates

Levandowski (right) at MCE 2016. Source: Wikimedia Commons

Andy Rubin, who developed the Android operating system at Google and then went on to lead Google’s push into robotics through multiple acquisitions, has launched a new consumer products company. Anthony Levandowski, who, after many years with Google and its autonomous driving project, launched Otto (which Uber acquired), was sued by Google and has just been fired by Uber.

People involved in robotics – from the multi-disciplined scientists turned entrepreneurs to all the specialists and other engineers involved in any aspect of the industry of making things robotic – are a relatively small community. Most people know (or know of) most of the others, and get-togethers like ICRA, the IEEE International Conference on Robotics and Automation, being held this week in Singapore, are an opportunity to meet new up-and-coming talent as they present their papers and product ideas and mingle with older, more established players. Two of those players made headline news this week: Rubin, launching Essential, and Levandowski, getting fired.

Andy Rubin

Rubin came to Google in 2005 when it acquired Android, and left in 2014 to found an incubator for hardware startups, Playground Global. While at Google, Rubin became SVP of Mobile and Digital Content, overseeing the open-source Android smartphone operating system, and later started Google’s robotics group through a series of acquisitions. Android can now be found in more than 2 billion phones, TVs, cars and watches.

2008 Google Developer Day in Japan – Press Conference: Andy Rubin

In 2007, Rubin was developing his own version of a smartphone at Google, also named Android, when Apple launched the iPhone, a much more capable and stylish device. Google’s hardware was scrapped, but its software was marketed to HTC, whose handset became Google’s first Android-based phone. The software was similar enough to Apple’s that Steve Jobs was furious; as reported in Fred Vogelstein’s ‘Dogfight: How Apple and Google Went to War and Started a Revolution,’ he called Rubin a “big, arrogant f–k” and said “everything [he’s doing] is a f–king rip-off of what we’re doing.”

Jobs had trusted Google’s cofounders, Larry Page and Sergey Brin, and Google’s CEO Eric Schmidt, who sat on Apple’s board. All three had been telling Jobs about Android, but they kept telling him it would be different from the iPhone. He believed them until he actually saw the phone and its software and how similar they were to the iPhone’s, whereupon he insisted Google make a lot of changes and removed Schmidt from Apple’s board. Rubin was miffed and kept a sign on his office whiteboard that said “STEVE JOBS STOLE MY LUNCH MONEY.”

Quietly, stealthily, Rubin went about creating “a new kind of company using 21st-century methods to build products for the way people want to live in the 21st century.” That company is Essential and Essential just launched and is taking orders for its new $699 phone and a still-stealthy home assistant to compete with Amazon’s Echo and Google’s Home devices.

Wired calls the new Essential Phone “the anti-iPhone.” The first phones will ship in June.

Anthony Levandowski

In 2004, Levandowski and a team from UC Berkeley built and entered an autonomous motorcycle in the DARPA Grand Challenge. In 2007 he joined Google to work with Sebastian Thrun on Google Street View. Outside of Google, he started a mobile mapping company that experimented with LiDAR technology, and another that built a self-driving, LiDAR-equipped Prius. Google acquired both companies, including their IP.

In 2016 Levandowski left Google to found Otto, a company making self-driving kits to retrofit semi-trailer trucks. Just as the kit was launched, Uber acquired Otto and Levandowski became the head of Uber’s driverless car operation in addition to continuing his work at Otto.

Quoting Wikipedia,

According to a February 2017 lawsuit filed by Waymo, the autonomous vehicle research subsidiary of Alphabet Inc, Levandowski allegedly “downloaded 9.7 GB of Waymo’s highly confidential files and trade secrets, including blueprints, design files and testing documentation” before resigning to found Otto.

In March 2017, United States District Judge William Haskell Alsup referred the case to federal prosecutors after Levandowski exercised his Fifth Amendment right against self-incrimination. In May 2017, Judge Alsup ordered Levandowski to refrain from working on Otto’s LiDAR and required Uber to disclose its discussions of the technology. Levandowski was later fired by Uber for failing to cooperate in an internal investigation.

The Uncanny Valley of human-robot interactions

The device named “Spark” flew high above the man on stage as he waved his hands in the direction of the flying object. In a demonstration of DJI’s newest drone, the audience marveled at the Coke-can-sized device’s most compelling feature: gesture control. Instead of a traditional remote control, this flying selfie machine follows hand movements across the sky. Gestures are the most innate language of mammals, and including robots in our primal movements marks a new milestone of co-existence.

Madeline Gannon of Carnegie Mellon University is the designer of Mimus, a new gesture controlled robot featured in an art installation at The Design Museum in London, England. Gannon explained: “In developing Mimus, we found a way to use the robot’s body language as a medium for cultivating empathy between museum-goers and a piece of industrial machinery. Body language is a primitive yet fluid means of communication that can broadcast an innate understanding of the behaviors, kinematics and limitations of an unfamiliar machine.” Gannon wrote about her experiences recently in the design magazine Dezeen: “In a town like Pittsburgh, where crossing paths with a driverless car is now an everyday occurrence, there is still no way for a pedestrian to read the intentions of the vehicle…it is critical that we design more effective ways of interacting and communicating with them.”

So far, the biggest commercially deployed advances in human-robot interaction have been the conversational agents from Amazon, Google and Apple. While natural language processing has broken new ground in artificial intelligence, the social science of its acceptability in our lives might be its biggest accomplishment. Japanese roboticist Masahiro Mori described the danger of making robots and computer-generated voices too indistinguishable from humans as the “uncanny valley.” Mori cautioned inventors against building robots that are too human-sounding (and possibly looking), as the result elicits negative emotions best described as “creepy” and “disturbing.”

Recently, many toys have embraced conversational agents as a way of building greater bonds with kids and increasing the longevity of play. Barbie’s digital speech scientist, Brian Langner of ToyTalk, described the experience of crossing into the “Uncanny Valley”: “Jarring is the way I would put it. When the machine gets some of those things correct, people tend to expect that it will get everything correct.”

Kate Darling of MIT’s Media Lab, whose research centers on human-robot interactions, suggested that “if you get the balance right, people will like interacting with the robot, and will stop using it as a device and start using it as a social being.”

This logic inspired Israeli startup Intuition Robotics to create ElliQ, a bobbing-head (eyeless) robot. The animatronics are meant to help its customer base of elderly patients overcome their phobias of technology. According to Intuition Robotics’ CEO, Dor Skuler, the range of motion coupled with a female voice helps create a bond between the device and its user. Don Norman, usability designer of ElliQ, said: “It looks like it has a face even though it doesn’t. That makes it feel approachable.”

Mayfield Robotics decided to add cute, R2-D2-like sounds to its newest robot, Kuri. Mayfield hired former Pixar designers Josh Morenstein and Nick Cronan of Branch Creative with the sole purpose of making Kuri more adorable. To accomplish this mission, Morenstein and Cronan gave Kuri eyes, but not a mouth, as that would be, in their words, “creepy.” Cronan describes the challenge of designing the eyes: “Just by moving things a few millimeters, it went from looking like a dumb robot to a curious robot to a mean robot. It became a discussion of, how do we make something that’s always looking optimistic and open to listen to you?” Kuri bears a remarkable similarity to Morenstein and Cronan’s earlier theatrical robot, EVA.

At the far extreme of making robots act and behave human, RealDoll has been promoting six-thousand-dollar sex robots. To many, RealDoll has crossed the “Uncanny Valley” of creepiness with sex dolls that look and talk like humans. In fact, there is a growing grassroots campaign to ban RealDoll’s products globally on the grounds that they endanger the very essence of human relationships. Florence Gildea writes on the campaign’s blog: “The personalities and voices that doll owners project onto their dolls is pertinent for how sex robots may develop, given that sex doll companies like RealDoll are working on installing increasing AI capacities in their dolls and the expectation that owners will be able to customize their robots’ personalities.” The example given is how the doll expresses her “feelings” for her owner on Twitter:

Obviously a robot companion has no feelings; the sentiment is a projection by doll owners, who, in Gildea’s words, anthropomorphize “their dolls to sustain the fantasy that they have feelings for the owner. The Twitter accounts seemingly manifest the dolls’ independent existence so that their dependence on their owners can seem to signify their emotional attachment, rather than following inevitably from their status as objects. Immobility, then, can be misread as fidelity and devotion.” The implication of this behavior is that the owner’s female companion, albeit mechanical, enjoys “being dominated.” The fear the Campaign Against Sex Robots expresses is that the objectification of women (even robotic ones) reinforces problematic human sexual stereotypes.

Today, with technology at our fingertips, there is a growing phenomenon of preferring one-directional device relationships over complicated human encounters. MIT social sciences professor Sherry Turkle writes in her essay, Close Engagements With Artificial Companionship, that “over-stressed, overworked, people claim exhaustion and overload. These days people will admit they’d rather leave a voice mail or send an email than talk face-to-face. And from there, they say: ‘I’d rather talk to the robot. Friends can be exhausting. The robot will always be there for me. And whenever I’m done, I can walk away.’”

In the coming years, humans will communicate more and more with robots in their lives, from experiences in the home to the office to leisure time. The big question will not be the technical barriers, but the societal norms that will evolve to accept Earth’s newest species.

“What do we think a robot is?” asked designer Don Norman. “Some people think it should look like an animal or a person, and it should move around. Or it just has to be smart, sense the environment, and have motors and controllers.”

Norman’s answer, like beauty, could be in the eye of the beholder.

Wearable system helps visually impaired users navigate

New algorithms power a prototype system for helping visually impaired users avoid obstacles and identify objects. Courtesy of the researchers.

Computer scientists have been working for decades on automatic navigation systems to aid the visually impaired, but it’s been difficult to come up with anything as reliable and easy to use as the white cane, the type of metal-tipped cane that visually impaired people frequently use to identify clear walking paths.

White canes have a few drawbacks, however. One is that the obstacles they come in contact with are sometimes other people. Another is that they can’t identify certain types of objects, such as tables or chairs, or determine whether a chair is already occupied.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a new system that uses a 3-D camera, a belt with separately controllable vibrational motors distributed around it, and an electronically reconfigurable Braille interface to give visually impaired users more information about their environments.

The system could be used in conjunction with or as an alternative to a cane. In a paper they’re presenting this week at the International Conference on Robotics and Automation, the researchers describe the system and a series of usability studies they conducted with visually impaired volunteers.

“We did a couple of different tests with blind users,” says Robert Katzschmann, a graduate student in mechanical engineering at MIT and one of the paper’s two first authors. “Having something that didn’t infringe on their other senses was important. So we didn’t want to have audio; we didn’t want to have something around the head, vibrations on the neck — all of those things, we tried them out, but none of them were accepted. We found that the one area of the body that is the least used for other senses is around your abdomen.”

Katzschmann is joined on the paper by his advisor Daniela Rus, an Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science; his fellow first author Hsueh-Cheng Wang, who was a postdoc at MIT when the work was done and is now an assistant professor of electrical and computer engineering at National Chiao Tung University in Taiwan; Santani Teng, a postdoc in CSAIL; Brandon Araki, a graduate student in mechanical engineering; and Laura Giarré, a professor of electrical engineering at the University of Modena and Reggio Emilia in Italy.

Parsing the world

The researchers’ system consists of a 3-D camera worn in a pouch hung around the neck; a processing unit that runs the team’s proprietary algorithms; the sensor belt, which has five vibrating motors evenly spaced around its forward half; and the reconfigurable Braille interface, which is worn at the user’s side.

The key to the system is an algorithm for quickly identifying surfaces and their orientations from the 3-D-camera data. The researchers experimented with three different types of 3-D cameras, which used three different techniques to gauge depth but all produced relatively low-resolution images — 640 pixels by 480 pixels — with both color and depth measurements for each pixel.

The algorithm first groups the pixels into clusters of three. Because the pixels have associated location data, each cluster determines a plane. If the orientations of the planes defined by five nearby clusters are within 10 degrees of each other, the system concludes that it has found a surface. It doesn’t need to determine the extent of the surface or what type of object it’s the surface of; it simply registers an obstacle at that location and begins to buzz the associated motor if the wearer gets within 2 meters of it.
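To make the surface-detection step concrete, here is a minimal Python sketch of the idea as described above. The function names, the clustering input, and the exact checks are illustrative assumptions, not the researchers’ actual implementation:

```python
import numpy as np

def plane_normal(p1, p2, p3):
    """Unit normal of the plane through three 3-D points from the depth camera."""
    n = np.cross(p2 - p1, p3 - p1)
    length = np.linalg.norm(n)
    return n / length if length > 0 else None

def angle_between_deg(n1, n2):
    """Angle in degrees between two unit normals, ignoring the normals' sign."""
    cos = abs(np.clip(np.dot(n1, n2), -1.0, 1.0))
    return np.degrees(np.arccos(cos))

def is_surface(cluster_normals, tolerance_deg=10.0):
    """Declare a surface when five nearby clusters agree to within 10 degrees."""
    if len(cluster_normals) < 5:
        return False
    ref = cluster_normals[0]
    return all(angle_between_deg(ref, n) <= tolerance_deg
               for n in cluster_normals[1:5])

def should_buzz(obstacle_xyz, alert_range_m=2.0):
    """Buzz the belt motor once the wearer is within 2 meters of the obstacle."""
    return np.linalg.norm(obstacle_xyz) <= alert_range_m
```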

Chair identification is similar but a little more stringent. The system needs to complete three distinct surface identifications, in the same general area, rather than just one; this ensures that the chair is unoccupied. The surfaces need to be roughly parallel to the ground, and they have to fall within a prescribed range of heights.
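A similarly hedged sketch of the chair-finding logic follows; the tilt tolerance and seat-height band are invented placeholders for the “prescribed range of heights” mentioned above:

```python
import numpy as np

UP = np.array([0.0, 0.0, 1.0])  # ground normal in the system's reference frame

def is_seat_candidate(normal, height_m,
                      max_tilt_deg=10.0, height_band_m=(0.35, 0.55)):
    """A surface counts toward a chair if it is roughly parallel to the ground
    and sits within a plausible seat-height band (both thresholds assumed)."""
    tilt = np.degrees(np.arccos(abs(np.clip(np.dot(normal, UP), -1.0, 1.0))))
    return tilt <= max_tilt_deg and height_band_m[0] <= height_m <= height_band_m[1]

def chair_found(surfaces):
    """Require three distinct qualifying surfaces in the same general area,
    which doubles as a check that the seat is unoccupied."""
    hits = [s for s in surfaces if is_seat_candidate(s["normal"], s["height_m"])]
    return len(hits) >= 3
```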

Tactile data

The belt motors can vary the frequency, intensity, and duration of their vibrations, as well as the intervals between them, to send different types of tactile signals to the user. For instance, an increase in frequency and intensity generally indicates that the wearer is approaching an obstacle in the direction indicated by that particular motor. But when the system is in chair-finding mode, for example, a double pulse indicates the direction in which a chair with a vacant seat can be found.
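One way to picture this encoding is as a mapping from detection events to vibration parameters. The specific frequencies and scaling below are invented for illustration; only the qualitative behavior (rising frequency and intensity on approach, a double pulse in chair-finding mode) comes from the description above:

```python
def obstacle_signal(distance_m, max_range_m=2.0):
    """Approach signal: frequency and intensity rise as the obstacle nears."""
    proximity = max(0.0, min(1.0, 1.0 - distance_m / max_range_m))
    return {
        "frequency_hz": 50 + 200 * proximity,  # buzzes faster when closer
        "intensity": 0.2 + 0.8 * proximity,    # buzzes harder when closer
        "pattern": "continuous",
    }

def chair_signal():
    """Chair-finding mode: a double pulse marks the direction of a vacant seat."""
    return {"frequency_hz": 150, "intensity": 0.6, "pattern": "double_pulse"}
```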

The Braille interface consists of two rows of five reconfigurable Braille pads. Symbols displayed on the pads describe the objects in the user’s environment — for instance, a “t” for table or a “c” for chair. The symbol’s position within its row indicates the direction in which the object can be found; the row it appears in indicates its distance. A user adept at Braille should find that the signals from the Braille interface and the belt-mounted motors coincide.
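The Braille display can thus be thought of as a small two-by-five grid, with the column encoding direction and the row encoding distance. A minimal sketch, with the symbol set taken from the description above and the distance binning assumed:

```python
SYMBOLS = {"table": "t", "chair": "c"}

def braille_grid(objects, n_directions=5, n_rows=2, max_range_m=4.0):
    """Place one letter per detected object: column (0-4) encodes bearing,
    row (0-1) encodes near vs. far. The 4 m range cap is an assumption."""
    grid = [[" "] * n_directions for _ in range(n_rows)]
    for obj in objects:
        col = min(int(obj["bearing_frac"] * n_directions), n_directions - 1)
        row = min(int(obj["distance_m"] / max_range_m * n_rows), n_rows - 1)
        grid[row][col] = SYMBOLS.get(obj["kind"], "?")
    return grid

# Example: a chair slightly right of center at 1.2 m, a table far left at 3 m.
for row in braille_grid([
        {"kind": "chair", "bearing_frac": 0.6, "distance_m": 1.2},
        {"kind": "table", "bearing_frac": 0.1, "distance_m": 3.0}]):
    print(row)
```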

In tests, the chair-finding system reduced subjects’ contacts with objects other than the chairs they sought by 80 percent, and the navigation system reduced the number of cane collisions with people loitering around a hallway by 86 percent.

May 2017 fundings, acquisitions, IPOs and failures

In May 2017, two robotics-related companies got $9.5 billion in funding and 22 others raised $249 million. Acquisitions also continued to be substantial, with Toyota Motor’s $260 million acquisition of Bastian Solutions plus three others (where the amounts weren’t disclosed).

Fundings

  1. Didi Chuxing, the Uber of China, raised $5.5 billion in a round led by SoftBank, with new investor Silver Lake Kraftwerk joining previous investors China Merchants Bank and Bank of Communications. According to TechCrunch, this latest round brings the total raised by Didi to about $13 billion. Uber, by comparison, has raised $8.81 billion.
  2. Nvidia Corp, a Santa Clara, CA-based speciality GPU maker, raised $4 billion (representing a 4.9% stake in the company) according to Bloomberg. Nvidia’s newest chips are focused on providing power for deep learning for self-driving vehicles.
  3. ClearMotion, a Woburn, MA automotive technology startup that’s building shock absorbers with robotic, software-driven adaptive actuators for car stability, has raised $100 million in a Series C round led by a group of JP Morgan clients and NEA, Qualcomm Ventures and more.
  4. Echodyne, a Bellevue, WA developer of radar vision technology used in drones and self-driving cars, has raised $29 million in a Series B round led by New Enterprise Associates and joined by Bill Gates, Madrona Venture Group, and others.
  5. DeepMap, a Silicon Valley mapping startup, raised $25 million in a round led by Accel that also included GSR Ventures and Andreessen Horowitz.

    “Autonomous vehicles are tempting us with a radically new future. However, this level of autonomy requires a highly sophisticated mapping and localization infrastructure that can handle massive amounts of data. I’m very excited to work with the DeepMap team, who have the requisite expertise in mapping, vision, and large scale operations, as they build the core technology that will fuel the next generation of transportation,” said Martin Casado, general partner at Andreessen Horowitz.

  6. Hesai Photonics Technology, a transplanted Silicon Valley-to-Shanghai sensor startup, raised $16 million in a Series A round led by Pagoda Investment with participation from Grains Valley Venture Capital, Jiangmen Venture Capital and LightHouse Capital Management. Hesai is developing a hybrid LiDAR device for self-driving cars and has already partnered with a number of autonomous driving technology and car companies, including Baidu, Chinese electric vehicle start-up NIO and self-driving tech firm UiSee.
  7. Abundant Robotics, a Menlo Park, CA-based automated fruit-picking tech developer, raised $10 million in venture funding. GV (Google Ventures) led the round, and was joined by BayWa AG and Tellus Partners. Existing partners Yamaha Motor Company, KPCB Edge, and Comet Labs also participated.
  8. TriLumina Corp., an Albuquerque, NM-based developer of solid-state automotive LiDAR illumination for ADAS and autonomous driving, closed a $9 million equity and debt financing. Backers included new investors Kickstart Seed Fund and existing stakeholders Stage 1 Ventures, Cottonwood Technology Fund, DENSO Ventures and Sun Mountain Capital.
  9. Bowery Farming, a NYC indoor vertical farm startup, raised $7.5 million (in February) in a seed round led by First Round Capital and including Box Group, Homebrew, Flybridge, Red Swan, RRE, Lerer Hippeau Ventures, and Tom Colicchio, a restaurateur and judge on the reality cooking show Top Chef.
  10. Taranis, an Israel-based precision agriculture intelligence platform, raised $7.5 million in Series A funding. Finistere Ventures led the round and was joined by Vertex Ventures. Existing investors Eshbol Investments, Mindset Ventures, OurCrowd, and Eyal Gura participated.
  11. Ceres Imaging, the Oakland, CA aerial imagery and analytics company, raised a $5 million Series A round of funding led by Romulus Capital.
  12. Stanley Robotics, a Paris-based automated valet parking service developer, raised $4 million in funding. Investors included Elaia Partners, Idinvest Partners and Ville de Demain. Stanley’s new parking robot is a mobile car-carrying lift that moves and tightly parks cars in outdoor locations.
  13. AIRY3D Inc, a Canadian start-up in 3D computer vision, raised $3.5 million in a seed round co-led by CRCM Ventures and R7 Partners. Other investors include WI Harper Group, Robert Bosch Venture Capital, Nautilus Venture Partners and several angel investors that are affiliates of TandemLaunch, the Montreal-based incubator that spun out AIRY3D.
  14. SkyX Systems, a Canada-based unmanned aircraft system developer, raised around $3 million in funding from Kuang-Chi Group.
  15. Catalia Health, a San Francisco-based patient care management company applying robotics to improve personal health, raised $2.5 million in funding. Khosla Ventures led the round.
  16. vHive, an Israeli startup developing software to operate autonomous drone fleets, raised $2 million (in April) in an A round led by StageOne VC and several additional private investors.
  17. Vivacity Labs, a London AI tech and sensor startup, raised $2 million from Tracsis, Downing Ventures and the London Co-Investment Fund and was also granted an additional $1.3 million from Innovate UK to create sensors with built-in machine learning to identify individual road users and manage traffic accordingly.
  18. Bluewrist, a Canadian integrator of vision systems, raised around $1.5 million (in February) from Istuary Toronto Capital.
  19. American Robotics, a Boston-based commercial farming drone system and analytics developer, raised $1.1 million in seed funding. Investors included Brain Robotics Capital.
  20. Kubo, a Danish educational robot startup, raised around $1 million from the Danish Growth Fund. Kubo is an educational robot that helps kids learn coding, math, language and music in a screenless, tangible environment.
  21. Zeals, a Japanese startup which produces interaction software for robots such as Palmi and Sota, has closed a $720k investment from Japanese adtech firm FreakOut Holdings.
  22. Kitty Hawk, a San Francisco drone platform startup, raised $600k in seed money in March from The Flying Object VC.
  23. Kraken Sonar, a Newfoundland marine tech startup, raised around $500k from RDC, a provincial Crown corporation responsible for improving Newfoundland and Labrador’s research and development. The funding will be used to develop the ThunderFish program, which will combine smart sonar, laser and optical sensors, advanced pressure-tolerant battery and thruster technologies, and cutting-edge artificial intelligence algorithms onboard a cost-effective AUV capable of reaching 20,000-foot depths.
  24. Motörleaf, a Canadian ag sensor, communications and software startup, raised an undisclosed amount in a seed round (in March).

Acquisitions

  1. Toyota Motor Corp paid $260 million to acquire Bastian Solutions, a U.S.-based materials handling systems integrator. Toyota is the world’s No. 1 forklift truck manufacturer in terms of global market share. With this acquisition, Toyota is making a “full-scale entry” into the North American logistics technology sector and will also use Bastian’s systems to make its own global supply chain more efficient.
  2. Ctrl.Me Robotics, a Hollywood, CA drone startup, was acquired by Snap, Inc. for “an amount less than $1 million.” Ctrl.Me developed a system for capturing movie-quality aerial video but was recently winding down its operations. Snap acquired its assets and technology, as well as talent. Snap already makes its own camera hardware, Spectacles, which captures video for Snap’s mobile app.
  3. Applied Research Associates, Inc. (ARA), an employee-owned scientific research and engineering company, acquired Neya Systems LLC on April 28, 2017. Neya Systems LLC is known for their development of unmanned systems for defense, homeland security, and commercial users. Terms of the deal were not disclosed.
  4. Trimble has acquired Müller-Elektronik and all its subsidiary companies for an undisclosed amount. Müller is a German manufacturer and integrator of farm implement controls, steering kits and precision farming solutions. The transaction is expected to close in Q3 2017. Müller was key in the development of the ISOBUS communication protocol found in most tractors and towed implements, which allows one terminal to control several implements and machines, regardless of manufacturer.

IPOs

  1. Gamma 2 Robotics, a security robot maker, launched a $6 million private offering to accredited investors.
  2. Aquabotix, a Fall River, MA-headquartered company, raised $5.5 million from their IPO of UUV (ASX:UUV) on the Australian Securities Exchange (ASX). Aquabotix manufactures commercial and industrial underwater drone/camera systems and has shipped over 350 units worldwide.

Failures

  1. FarmLink LLC
  2. EZ Robotics (CN)

Europe regulates robotics: Summer school brings together researchers and experts in robotics

After a successful first edition in 2016, our next summer school on The Regulation of Robotics in Europe: Legal, Ethical and Economic Implications will take place in Pisa at the Scuola Sant’Anna, from 3–8 July.

When the Robolaw project came to an end – and we presented our results before the European Parliament – we clearly perceived that a leap was needed not only in some approaches to regulation but also in the way social scientists, as well as engineers, are trained.

Indeed, in order to undertake technical analysis in law and robotics without being lured into science fiction, an adequate understanding of the peculiarities of the systems being studied is required. A bottom-up approach, like the one adopted by Robolaw and its guidelines, is essential.

Social scientists, and lawyers in particular, often lack such knowledge and thus tend either to make unreasonable assumptions – of technological developments that are far-fetched or simply unrealistic – or to misperceive what the pivotal point of the analysis is going to be. The notion of autonomy is a fitting example. The consequence is not simply bad scientific literature, but potentially inadequate policies and thence wrong decisions – even legislative ones – being adopted, impacting the research and development of new applications while overlooking the issues that really matter.

Similarly, engineers working in robotics are often confronted with philosophical and legal debates, involving the devices they research, that they are not always equipped to understand. Those debates are nonetheless precious, for they make it possible to identify societal concerns and expectations that can be used to orient research strategically, and engineers ought also to participate and have a say.

Ultimately, it is in everybody’s interest to better address existing and emerging needs, fulfilling desires and avoiding eliciting often ungrounded fears. This is what the European Union understands as Responsible Research and Innovation, but it is also the prerequisite for the diffusion of new applications in society and the emergence of a sound robotics industry. Moreover, the current tendency in EU regulation to favour by-design approaches – whereby privacy or other rights need to be enforced through the very functioning of the device – requires technicians to consider such concerns early on, during the development phase of their products.

A common language thus needs to be created to avoid a Babel-tower effect – one that preserves each discipline’s peculiarities and specificities, yet allows close cooperation.

What is required is a multidisciplinary approach, grounded in philosophy (ethics in particular), law (including law-and-economics methodologies), economics and innovation management, and engineering.

With that idea in mind, we competed for and won a Jean Monnet grant – a prestigious funding action of the EU Commission, mainly directed towards the promotion of education and teaching activities – with a project titled Europe Regulates Robotics, and organized the first edition of the Summer School on The Regulation of Robotics in Europe: Legal, Ethical and Economic Implications in 2016.

The opening event of the Summer School saw the participation of MEP (Member of the European Parliament) Mady Delvaux-Stehres, who presented what was then the draft recommendation – now approved – of the EU Parliament on Civil Law Rules on Robotics; Mihalis Kritikos, a Policy Analyst at the Parliament who personally contributed to the drafting of that document; and Maria Chiara Carrozza – former Italian Minister of University Education and Research, professor of robotics and member of the Italian Senate – who discussed Italian political initiatives. We also had entrepreneurs, such as Roberto Valleggi, and engineers from industry, such as Arturo Baroncelli of Comau, and from academia, such as Fabio Bonsignorio, who also taught the course.

Classes dealt with methodologies (the Robolaw approach), notions of autonomy, liability and the different liability models, privacy, benchmarking and robot design, machine ethics, human enhancement through technology, and innovation management and technology assessment. Students also had the chance to visit the BioRobotics Institute laboratories in Pontedera (directed by Prof. Paolo Dario) and see many different applications and research projects being carried out, explained directly by the people who work on them.

The most impressive part was, however, our class. We managed to put together a truly international group of young and bright minds, many of whom were already enrolled in a PhD program – in law, philosophy, engineering or management – and came from universities such as Edinburgh, London School of Economics, Sorbonne, Cambridge, Vienna, Bologna, Suor Orsola, Bicocca, Milan, Hannover, Pisa, Pavia and Freiburg. Others came from prominent European robotics companies, or were practitioners, entrepreneurs and policy makers from EU institutions.

At the end of the Summer School, some students presented their research on a broad set of extremely interesting topics, such as driverless-car liability and innovation management, machine ethics and the trolley problem, and anthrobotics and decision-making in natural and artificial systems.

We had vivid in-class debates, and a true community was created that is still in contact today. Five of our students actively participated in the 2017 European Robotics Forum in Edinburgh, and more are working and publishing on these matters.

We can say we truly achieved our goal! However, the challenge has just begun. We want to reach out to more people and replicate this successful initiative. A second edition of the Summer School will take place this year, again in Pisa at the Scuola Sant’Anna, from July 3rd to 8th, and we are accepting applications until June 7th.

I am certain we will manage again to put together an incredible group of individuals, and I can’t wait to meet our new class. On our side, we are preparing a lot of interesting surprises for them, including the participation of policy makers involved in the regulation of robotics at EU level to provide a first-hand look at what is happening in the EU.

More information about the summer school can be found on our website here.

Registration for the summer school can be found here.

Researcher to develop bio-inspired ‘smart’ knee for prosthetics

A researcher at the University of the West of England (UWE Bristol) is developing a bio-inspired ‘smart’ knee joint for prosthetic lower limbs. Dr Appolinaire Etoundi, based at Bristol Robotics Laboratory, is leading the research and will analyse the functions, features and mechanisms of the human knee in order to translate this information into a new bio-inspired procedure for designing prosthetics.

Dr Etoundi gained his PhD in bio-inspired technologies from the University of Bristol, where he developed a design procedure for humanoid robotic knee joints. He is now turning his attention to nature – a growing area in robotics known as biomimicry, which combines curiosity about how biological systems work with solving complex engineering problems – in order to develop a prototype smart knee joint for prosthetics.

Andy Lewis, a Paralympic Triathlon Gold Medallist (Rio 2016) who wears a lower limb prosthetic, will try out the new joint once developed, to compare its energy consumption and gait efficiency to current prosthetics. There are currently approximately 100,000 knee replacement operations performed every year in the UK. Lower limb amputation has a profound effect on daily life, and prostheses must be comfortable and adapted to their users so they can maintain daily activities such as walking and running.

Looking for inspiration in nature, Dr Etoundi will examine how the human knee works, as well as looking closely at the design of knee replacements used in surgery and at current knee joints in prosthetic limbs.  These three areas of knowledge will inform a procedure for designing a knee that could give greater, more responsive movement, while offering the control and intelligence that comes from robotics.

Dr Etoundi says, “I have spent years designing knee joints for humanoid robots, but the human knee has evolved over millions of years and is incredibly successful.  The human knee is a very complex joint with ligaments, which guide the motion of the knee, and bones that perform the motion.  Current mechanisms in prosthetic knees have a straightforward pin joint with ball bearings that does not have the sophisticated range of motion and stability of the human knee with its cruciate ligaments.

“The complex interaction between the soft tissue (ligaments) and the bones in the knee joint is an area that has yet to be replicated in prosthetics.  We need to understand this better in order to provide a better knee joint for people to use. I will study the different mechanisms within the knee joint and look for ways to translate its beneficial functionalities into a design concept for prosthetics.

“I want to create a prosthetic knee that will give the greatest range of motion with the least friction, enabling walking, climbing stairs, squatting and stability, while also offering important attributes of current prosthetics and the benefits of robotic technology.”

Andy Lewis, who will try out Dr Etoundi’s nature-inspired design says, “I was pleased when Appo approached me. He understands the importance of a good prosthetic for sports people, and it will be interesting to see what he discovers that might make a better prosthetic which is more responsive.  I am looking forward to seeing his early designs next year and trying them out.”

The research team includes Professor Richie Gill (University of Bath), Dr Ravi Vaidyanathan (Imperial College London) and Dr Michael Whitehouse (University of Bristol).

Dr Etoundi is a Senior Lecturer in Mechatronics at UWE Bristol and is a member of the Medical Robotics group at Bristol Robotics Laboratory, which looks at the application of robotic technology in human-controlled and surgical applications.

 

Artificial intelligence: Europe needs a “human in command approach,” says EESC

Credit: EESC

The EU must pursue policies that ensure that artificial intelligence (AI) in Europe is developed, deployed and used for the benefit of society and social welfare, and not to their detriment, the Committee said in an own-initiative opinion on the social impact of AI, in which 11 fields for action are identified.

“We need a human-in-command approach to artificial intelligence, in which machines remain machines and people always retain control over these machines,” said rapporteur Catelijne Muller (NL – Workers’ Group). This is not just about technical control: “People can and must decide whether, when and how AI is used in our daily lives, what tasks we entrust to AI, and how transparent and ethical all of this is. Ultimately, it is up to us to decide whether we want certain activities to be carried out, and care or medical decisions to be made, by AI, and whether we want to accept that AI might jeopardize our security, privacy and autonomy,” said Mrs. Muller.

Artificial intelligence has recently undergone rapid growth. The market for AI is approximately USD 664 million in size and is expected to grow to USD 38.8 billion by 2025. It is virtually undisputed that artificial intelligence can have great social benefits: consider applications for sustainable agriculture, environmentally friendly production, safer traffic, a safer financial system, better medicine and more. Artificial intelligence may even contribute to the eradication of disease and poverty.

But the benefits of AI can only be realized if the challenges related to it are addressed. The Committee has identified 11 areas in which AI raises societal challenges: ethics, security, transparency, privacy and standards, employment, education, (in)equality and inclusiveness, legislation, governance and democracy, as well as warfare and superintelligence.

These challenges cannot be left to industry alone; they are a matter for governments, social partners, scientists and companies together. The EESC believes the EU should adopt policy frameworks in this area and should play a global leadership role. “Pan-European norms and standards are required for AI, as we now have for food and household appliances. We need a pan-European code of ethics to ensure that AI systems remain compatible with the principles of dignity, integrity, freedom and cultural and gender diversity, as well as basic human rights,” said Catelijne Muller, “and we need employment strategies to maintain or create jobs and to ensure that employees remain autonomous and take pleasure in their work.”

The impact of AI on employment is indeed central to the debate on AI in Europe, where unemployment remains high because of the crisis. Although predictions of the share of jobs that will be lost to AI over the next 20 years vary from a modest 5% to a disastrous 100% – a society without jobs – the rapporteur believes, based on a recent McKinsey report, that it is more likely that parts of jobs, rather than complete jobs, will be taken over by AI. In that case, it comes down to education, lifelong learning and training to ensure that workers benefit from these developments rather than fall victim to them.

The EESC opinion also calls for a European AI infrastructure, with open-source, privacy-respecting development and learning environments, real-life test environments, and high-quality data sets for the training and development of AI systems. Artificial intelligence has so far been developed mainly by the “big five” (Amazon, Facebook, Apple, Google and Microsoft). Although these companies say they favor the open development of AI, and some offer their AI development platforms as open source, full accessibility is not guaranteed. An AI infrastructure for the EU, possibly with a European AI certification or label, could not only help promote the development of responsible and sustainable AI but also give the EU a competitive advantage.
