News


The ups (and downs) of tech, robotic and AI funding

Source: Getty Images

SoftBank’s Pepper humanoid robot operation (a joint venture with Foxconn and Alibaba) has incurred a $274 million loss, while Asia has more than doubled its funding for tech startups so far in 2017. No one ever said VC funding was for the faint of heart.

The Ups

According to PwC and CB Insights, venture capital investments in Asia in the first six months of 2017 totaled $28.8 billion. VC investments in North America for the same period totaled $18.4 billion.

Source: PwC | CB Insights MoneyTree™ Report Q2 2017
Read more at: https://www.dealstreetasia.com/stories/asia-overtakes-us-for-the-first-time-in-vc-funding-cbinsights-77547/

CB Insights reports that 45% of all dollars invested in tech in 2017 went to Asian firms. 

Largest deals in Asia so far this year included Didi Chuxing raising $5.5 billion, One97 Communications ($1.4 billion), GO-JEK ($1.2 billion), Bytedance ($1 billion) and Ele.me ($1 billion).

Largest deals in North America in the quarter included San Francisco-based Lyft – which raised $600 million, Outcome Health ($500 million), Group Nine Media ($485 million), Houzz ($400 million), and Guardant Health ($360 million).

The number of deals around the world, as shown in the chart above, remains heavily weighted toward the West. Almost every day the news reports another fund being set up to invest in one area of tech or another. For example, Toyota Motor Corp today announced a $100 million fund (Toyota AI Ventures) for AI and robotics startups and has already made its first investments. The first three went to a maker of cameras that monitor drivers and roads, a creator of autonomous car-mapping algorithms, and a developer of robotic companions for the elderly.

The Downs

Nikkei Asian Review reports on SoftBank Robotics’ $274 million loss, which it attributes to the Pepper humanoid robot joint venture with Alibaba and Foxconn. The subsidiary was established in 2014 and began consumer sales of Pepper in June 2015 and business sales that October.

“Although the company does not release earnings, it recorded sales of 2.2 billion yen and a net loss of 11.7 billion yen in fiscal 2015, according to Tokyo Shoko Research. That is markedly worse than the 2.3 billion yen net loss from fiscal 2014. 'Pepper is unprofitable because of its relatively low price for a humanoid robot, costing just 198,000 yen ($1,750), which cannot cover development costs.'”

A SoftBank PR statement said the company will increase corporate sales and improve earnings through related businesses such as apps and content, and that sales remain strong.

Miniaturizing the brain of a drone


By Jennifer Chu

In recent years, engineers have worked to shrink drone technology, building flying prototypes that are the size of a bumblebee and loaded with even tinier sensors and cameras. Thus far, they have managed to miniaturize almost every part of a drone, except for the brains of the entire operation — the computer chip.

Standard computer chips for quadcopters and other similarly sized drones process an enormous amount of streaming data from cameras and sensors, and interpret that data on the fly to autonomously direct a drone’s pitch, speed, and trajectory. To do so, these computers use between 10 and 30 watts of power, supplied by batteries that would weigh down a much smaller, bee-sized drone.

Now, engineers at MIT have taken a first step in designing a computer chip that uses a fraction of the power of larger drone computers and is tailored for a drone as small as a bottlecap. They will present a new methodology and design, which they call “Navion,” at the Robotics: Science and Systems conference, held this week at MIT.

The team, led by Sertac Karaman, the Class of 1948 Career Development Associate Professor of Aeronautics and Astronautics at MIT, and Vivienne Sze, an associate professor in MIT’s Department of Electrical Engineering and Computer Science, developed a low-power algorithm, in tandem with pared-down hardware, to create a specialized computer chip.

The key contribution of their work is a new approach for designing the chip hardware and the algorithms that run on the chip. “Traditionally, an algorithm is designed, and you throw it over to a hardware person to figure out how to map the algorithm to hardware,” Sze says. “But we found by designing the hardware and algorithms together, we can achieve more substantial power savings.”

“We are finding that this new approach to programming robots, which involves thinking about hardware and algorithms jointly, is key to scaling them down,” Karaman says.

The new chip processes streaming images at 20 frames per second and automatically carries out commands to adjust a drone’s orientation in space. The streamlined chip performs all these computations while using just below 2 watts of power — making it an order of magnitude more efficient than current drone-embedded chips.

Karaman says the team’s design is the first step toward engineering “the smallest intelligent drone that can fly on its own.” He ultimately envisions disaster-response and search-and-rescue missions in which insect-sized drones flit in and out of tight spaces to examine a collapsed structure or look for trapped individuals. Karaman also foresees novel uses in consumer electronics.

“Imagine buying a bottlecap-sized drone that can integrate with your phone, and you can take it out and fit it in your palm,” he says. “If you lift your hand up a little, it would sense that, and start to fly around and film you. Then you open your hand again and it would land on your palm, and you could upload that video to your phone and share it with others.”

Karaman and Sze’s co-authors are graduate students Zhengdong Zhang and Amr Suleiman, and research scientist Luca Carlone.

From the ground up

Current minidrone prototypes are small enough to fit on a person’s fingertip and are extremely light, requiring only 1 watt of power to lift off from the ground. Their accompanying cameras and sensors use up an additional half a watt to operate.

“The missing piece is the computers — we can’t fit them in terms of size and power,” Karaman says. “We need to miniaturize the computers and make them low power.”
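The article’s figures (about 1 watt for lift, another half watt for sensors) make the compute budget the dominant unknown. A minimal sketch below turns those numbers into flight-time estimates; the battery capacity is an assumption for illustration, not a figure from the article:

```python
# Rough flight-time estimate for a bee-sized drone.
# Lift (~1 W) and sensor (~0.5 W) figures come from the article;
# the battery capacity is a hypothetical assumption.

def flight_time_minutes(battery_wh, draw_w):
    """Minutes of flight from a battery of battery_wh watt-hours at draw_w watts."""
    return 60 * battery_wh / draw_w

BATTERY_WH = 0.5        # assumed tiny LiPo cell (not from the article)
BASE_W = 1.0 + 0.5      # lift + sensors, per the article

for chip_w, label in [(20.0, "standard drone chip"),
                      (2.0, "low-power FPGA design"),
                      (0.3, "future ASIC target")]:
    t = flight_time_minutes(BATTERY_WH, BASE_W + chip_w)
    print(f"{label}: {t:.1f} min")
```

Under these assumptions, cutting the chip from ~20 W to ~2 W extends flight time roughly sixfold, which is the practical motivation for the miniaturization effort described here.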

The group quickly realized that conventional chip design techniques would likely not produce a chip that was small enough and provided the required processing power to intelligently fly a small autonomous drone.

“As transistors have gotten smaller, there have been improvements in efficiency and speed, but that’s slowing down, and now we have to come up with specialized hardware to get improvements in efficiency,” Sze says.

The researchers decided to build a specialized chip from the ground up, developing algorithms to process data, and hardware to carry out that data-processing, in tandem.

Tweaking a formula

Specifically, the researchers made slight changes to an existing algorithm commonly used to determine a drone’s “ego-motion,” or awareness of its position in space. They then implemented various versions of the algorithm on a field-programmable gate array (FPGA), a very simple programmable chip. To formalize this process, they developed a method called iterative splitting co-design that could strike the right balance of achieving accuracy while reducing the power consumption and the number of gates.

A typical FPGA consists of hundreds of thousands of disconnected gates, which researchers can connect in desired patterns to create specialized computing elements. Reducing the number of gates through co-design allowed the team to choose an FPGA chip with fewer gates, leading to substantial power savings.

“If we don’t need a certain logic or memory process, we don’t use them, and that saves a lot of power,” Karaman explains.
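The co-design loop described above amounts to a constrained search: each algorithm variant implies an FPGA resource and power cost, and the team keeps only variants that still meet accuracy requirements. A small sketch of that selection step, with entirely hypothetical names and numbers, might look like this:

```python
# Illustrative sketch of a hardware/algorithm co-design sweep.
# All candidate names, accuracies, gate counts, and power figures
# are hypothetical, not from the Navion project.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    accuracy: float   # fraction of pose estimates within an error bound
    gates: int        # FPGA logic gates the variant requires
    power_w: float    # estimated power draw in watts

def pick_design(candidates, min_accuracy):
    """Return the lowest-power candidate that meets the accuracy target."""
    feasible = [c for c in candidates if c.accuracy >= min_accuracy]
    if not feasible:
        return None
    return min(feasible, key=lambda c: c.power_w)

candidates = [
    Candidate("full-precision",   0.99, 900_000, 5.0),
    Candidate("reduced-memory",   0.97, 400_000, 1.9),
    Candidate("aggressive-prune", 0.90, 150_000, 0.8),
]

best = pick_design(candidates, min_accuracy=0.95)
print(best.name)  # -> reduced-memory
```

The point of iterating algorithm and hardware together, rather than fixing the algorithm first, is that a slightly less accurate variant may unlock a much smaller FPGA, as the middle candidate does here.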

Each time the researchers tweaked the ego-motion algorithm, they mapped the version onto the FPGA’s gates and connected the chip to a circuit board. They then fed the chip data from a standard drone dataset — an accumulation of streaming images and accelerometer measurements from previous drone-flying experiments that had been carried out by others and made available to the robotics community.

“These experiments are also done in a motion-capture room, so you know exactly where the drone is, and we use all this information after the fact,” Karaman says.

Memory savings

For each version of the algorithm that was implemented on the FPGA chip, the researchers observed the amount of power that the chip consumed as it processed the incoming data and estimated its resulting position in space.

The team’s most efficient design processed images at 20 frames per second and accurately estimated the drone’s orientation in space, while consuming less than 2 watts of power.

The power savings came partly from modifications to the amount of memory stored in the chip. Sze and her colleagues found that they were able to shrink the amount of data that the algorithm needed to process, while still achieving the same outcome. As a result, the chip itself was able to store less data and consume less power.

“Memory is really expensive in terms of power,” Sze says. “Since we do on-the-fly computing, as soon as we receive any data on the chip, we try to do as much processing as possible so we can throw it out right away, which enables us to keep a very small amount of memory on the chip without accessing off-chip memory, which is much more expensive.”

In this way, the team was able to reduce the chip’s memory storage to 2 megabytes without using off-chip memory, compared to a typical embedded computer chip for drones, which uses off-chip memory on the order of a few gigabytes.
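Sze’s process-and-discard strategy is a streaming pattern: fold each incoming reading into a small running state, then let the reading go, so on-chip memory stays constant regardless of flight length. A minimal sketch of the pattern, with made-up sensor data standing in for real camera and accelerometer streams:

```python
# Minimal sketch of the process-and-discard (streaming) pattern.
# The sensor data here is fabricated for illustration only.

def stream_estimate(readings):
    """Fold a stream of (dx, dy) motion increments into a position estimate,
    holding only one reading in memory at a time."""
    x = y = 0.0
    for dx, dy in readings:   # process each reading as it arrives...
        x += dx
        y += dy
        # ...then it goes out of scope, keeping memory use constant
    return x, y

def fake_sensor(n):
    """Generator standing in for a live sensor: nothing is buffered."""
    for _ in range(n):
        yield (0.1, 0.05)

print(stream_estimate(fake_sensor(100)))  # roughly (10.0, 5.0)
```

Because `fake_sensor` is a generator, the full stream never exists in memory at once; this is the same property that let the team cap on-chip storage at 2 megabytes instead of buffering gigabytes off-chip.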

“Any which way you can reduce the power so you can reduce battery size or extend battery life, the better,” Sze says.

This summer, the team will mount the FPGA chip onto a drone to test its performance in flight. Ultimately, the team plans to implement the optimized algorithm on an application-specific integrated circuit, or ASIC, a more specialized hardware platform that allows engineers to design specific types of gates, directly onto the chip.

“We think we can get this down to just a few hundred milliwatts,” Karaman says. “With this platform, we can do all kinds of optimizations, which allows tremendous power savings.”

This research was supported, in part, by Air Force Office of Scientific Research and the National Science Foundation.

IROS Workshop: Best practices in designing roadmaps for robotics innovation

Join us at the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) for a full day workshop that will bring together international stakeholders in robotics to examine best practices for accelerating robotics innovation through strategic policy frameworks.

Best practices in designing effective roadmaps for robotics innovation

 

IROS Workshop | 8:30-17:00 September 28, 2017 | Vancouver Convention Centre | Vancouver BC

IROS 2017 Vancouver, September 24-28

This is a unique opportunity to learn from people who have played a significant role in designing and implementing major strategic robotics initiatives around the globe.

Objectives
In the past decade, a number of governing bodies and industry consortia have developed strategic roadmaps to guide investment in and development of robotic technology. With roadmaps from the US, South Korea, Japan, and the EU well underway, the time is right to take stock of these strategic robotics initiatives to see what is working, what is not, and what best practices in roadmap development might be broadly applied to other regions.

The objective of this two-part workshop is to examine the process of how these policy frameworks came to be created in the first place, how they have been tailored to local capabilities and strengths, and what performance indicators are being used to measure their success — so that participants may draw from international collective experience as they design and evaluate strategic robotics initiatives for their own regions.

Program Part ONE — Morning Session: “Developing innovation policy for robotics, and establishing key performance indicators that are relevant to your region”
The morning session will feature international speakers who have played a significant role in launching and shaping major strategic robotics initiatives across the globe. The focus of this session will be on the history and process of designing the roadmap (rather than on merely presenting the roadmap itself) and key performance indicators of roadmap success. Via presentations and panel discussion, the outcome of this session will be an exploratory overview of best practices and key performance metrics, so that participants can apply knowledge gained from the workshop as they design strategic robotics policy frameworks for their own regional or national contexts.

Program Part TWO — Afternoon Session: “Towards a national robotics strategy for Canada”
The afternoon session will bring together leading Canadian robotics experts from academia, industry, federal/provincial policy, and the national research council to discuss and strategize the future of robotics in Canada, with an emphasis on addressing the social, economic, legal/ethical and regulatory issues, and the robotics strengths and capabilities specific to this country. The main goals of this session will be to 1) establish a clear picture of the internal Canadian robotics landscape and how it compares to other nations worldwide, 2) discuss lessons learned in the International session within the Canadian context, and 3) discuss and identify gaps and opportunities for a Canadian initiative. Ultimately the Canadian session will serve as a venue for collecting data and viewpoints to support the development of a Canadian Robotics Roadmap.

Who should attend
This workshop is open to all members of academia, government, and industry with an interest in funding, policy and research strategy.

Part One of the workshop (morning session) will be broadly applicable to anyone with an interest in robotics policy, partnerships and funding.

Part Two (afternoon session) will be of particular interest to Canadian conference-goers as well as those who are interested in Canadian research and industry partnerships.

Call for participation
We are actively seeking participants for this workshop. If you are

involved in developing or evaluating a major strategic robotics initiative in your region and would like to participate in our international discussion, OR

a member of the Canadian robotics ecosystem, or a Canadian roboticist living abroad, who would like to be involved in a national robotics strategy for Canada

then please send an email with your expression of interest to: info@canadianroboticsnetwork.com. We look forward to hearing from you!

Workshop Organizers

  • Elizabeth Croft, Director, Collaborative Advanced Robotics and Intelligent Systems (CARIS) Lab, UBC
  • Clément Gosselin, Professor, Laboratoire de Robotique, Université Laval
  • Paul Johnston, Research Policy Consultant, former President of Precarn Incorporated
  • Dana Kulić, Associate Professor, Adaptive Systems Laboratory, University of Waterloo
  • AJung Moon, Co-Founder & Director, Open Roboethics Institute
  • Angela Schoellig, Assistant Professor, Institute for Aerospace Studies (UTIAS), University of Toronto
  • Hallie Siegel, Strategic Foresight & Innovation @ OCAD U, former Managing Editor @Robohub

Workshop Registration

Registration for IROS workshops is separate from the IROS conference fee. Please see the IROS Website for details: http://www.iros2017.org/registration-travel/registration# 

Early workshop registration deadline: July 22nd.

More Info
http://canadianroboticsnetwork.com/iros-workshop/

http://www.iros2017.org/program/workshops-and-tutorials

Contact
info@canadianroboticsnetwork.com

The Drone Center’s Weekly Roundup: 7/10/17

The U.S. Army is developing a drone that moves like a flying squirrel. Credit: David McNally/U.S. Army

News

A U.S. drone strike in Somalia targeted members of al-Shabab. It is the second drone strike in Somalia since President Trump relaxed rules for targeting members of the al-Qaeda-allied group. (New York Times)

The U.S. Federal Aviation Administration is offering refunds to drone hobbyists who paid the $5 fee to register with the agency. The move follows a federal court ruling in May that found that the FAA could not compel recreational drone users to register. The FAA has collected over $4 million in fees since it implemented the registration policy in December 2015. (Recode)

Commentary, Analysis, and Art

At the Washington Post, Greg Jaffe profiles U.S. Air Force analysts who examine video imagery from drones.

In a report published by the Mitchell Institute, Gen. David Deptula argues that the U.S. Department of Defense should create an office for unmanned aircraft to coordinate efforts across the different services. (Breaking Defense)

In Drone Warrior, Brett Velicovich and Christopher S. Stewart offer an insider’s account of running U.S. targeted killing operations. (Wired)

At Lawfare, Kenneth Anderson and Matthew Waxman present a primer on the legal and ethical debates over autonomous weapons.

At the Jamestown Foundation, Elsa Kania writes that China is seeking to leverage recent advances in swarming drones to bolster its military.

This month’s Signal magazine includes several articles on drones and robots. (AFCEA)

At Breaking Defense, Sydney J. Freedberg Jr. looks at how the lack of trust between humans and unmanned systems will inhibit the integration of robots in the future.

At the Morning Consult, Edward Graham writes that recent legislation in Congress would enable localities to have a greater say in drone regulations.

At TechCrunch, Helen Greiner discusses how robots and drones have taken on more and more roles.

At Poynter, Melody Kramer considers how recent aerial images of New Jersey Gov. Chris Christie on an empty beach demonstrate how drones are becoming effective newsgathering tools.

At Drone360, Lauren Sigfusson looks at how the FAA’s Part 107 waiver and authorization process has changed in recent weeks.

At Arkansas Matters, Chris Pulliam says that crop dusters are concerned about the potential threats posed to their aircraft by drones.

At DefenseNews, Burak Ege Bekdil writes that Turkey is increasingly relying on drones for border security, counterterrorism, and operations against Kurdish groups.  

At the National Interest, Samuel Bendett considers whether Russia will ever be able to catch up with the U.S. and Israel in terms of drone development.

In a letter to the editor of the Pocono Record, Pete Sauvigne argues that a local model aircraft club should not be stripped of its permission to fly in the Delaware Water Gap National Recreation Area.

At The Drive, Marco Margaritoff looks at a few of National Geographic’s best drone photos of the year to date.

Know Your Drone

The U.S. Army is developing a drone that moves like a flying squirrel. (Popular Mechanics)

Meanwhile, the Army is developing a system that allows a single human operator to control multiple drones. (Press Release)

Defense firm Rafael has added a laser interceptor to its Drone Dome counter-drone system. (IHS Jane’s 360)

Italian firm Piaggio has resumed flight testing of its HammerHead military drone a year after the program was grounded due to a crash. (AIN Online)

Israeli defense firm Elbit has demonstrated its Seagull unmanned surface vehicle in an end-to-end mine countermeasures mission. (IHS Jane’s 360)

A group of researchers in Florida is developing an underwater drone that seeks out and collects lionfish, an invasive species in the area. (Pensacola News Journal)

Researchers at the University of California, Berkeley are developing a jumping one-legged robot that could eventually be used for search and rescue. (Wired)

The U.S. Coast Guard will begin evaluating different small drones with the goal of acquiring a system in 2018. (C4ISRNet)

Meanwhile, South Korea’s Coast Guard is looking to equip its offshore patrol vessels with aerial surveillance drones by 2020. (IHS Jane’s 360)

The Netherlands Aerospace Centre is testing a small jet-powered fixed-wing drone. (Unmanned Systems Technology)

Hot dog company Oscar Mayer revealed the newest addition to its WienerFleet, the WienerDrone, a hot dog delivery drone. (Fortune)

Drones at Work

An inmate who broke out of a South Carolina prison reportedly escaped using tools delivered to him by a drone. (The New York Times)

The Russian Navy is reportedly displeased with the performance of its Inspector Mk2 unmanned surface vehicle developed by French firm ECA Group. (Mil.Today)

Officials in Kaziranga National Park in India are using drones to monitor wildlife displaced by recent floods in the area. (Hindustan Times)

The Redondo Beach Police Department in California used a drone to conduct aerial surveillance during the local Independence Day celebrations. (Easy Reader News)

Meanwhile, police in Nashville, Tennessee arrested a man for flying a drone over a large crowd during a Fourth of July celebration. (Fox 17)

The French Air Force flew one of its MQ-9 Reaper drones in civilian domestic airspace for the first time. (Aviation Analysis Wing)

Ohio has passed a law that permits ground delivery robots to operate on sidewalks. It is the fifth U.S. state to pass such a law. (Recode)

Florida governor Rick Scott signed a bill that establishes statewide regulations for drone use. (Flying Magazine)

A volunteer rescue group in Italy is using drones to look for signs of possible impending rock slides before they happen. (Motherboard)

Industry Intel

The U.S. Army awarded Assist Consultants an $18.1 million contract to build a facility for the Navy’s MQ-4C Triton at Al Dhafra Air Base in the United Arab Emirates. (FBO)

The Department of Justice awarded AARDVARK a $51,247 contract for backpackable robots. (FBO)

Germany’s Bayer CropScience awarded SlantRange, a U.S. company that makes sensors for drones, a contract to collect data on crop breeding and research programs. (Agriculture.com)

Australia’s new Defense Cooperative Research Center will award $50 million in grants to develop autonomous capabilities for military unmanned systems. (Press Release)

The European Union selected Sensofusion, a Finnish company, to help develop a counter-drone security system. (Press Release)

Charles de Gaulle Airport in Paris has contracted Aveillant to install a counter-drone system called Gamekeeper. (FlightGlobal)

Germany is pausing plans to acquire the Israel Aerospace Industries Heron military drone until after its upcoming election. (DefenseNews)

IAI has agreed to transfer drone technology to India’s Dynamatic Technologies and Elcom Systems. (IHS Jane’s Defence Weekly)

Meanwhile, Indonesia and Turkey agreed to cooperate on the development of military systems and technologies, including drones. (IHS Jane’s Defence Weekly)

Kratos Defense &amp; Security Solutions’ stock price rose 11 percent in June, due in part to news of a sale of attritable unmanned aircraft. (Motley Fool)

The Israeli military awarded Duke Robotics, a Florida-based startup, a contract for the TIKAD, a quadrotor drone that can be armed with a machine gun or grenade launcher. (Defense One)

For updates, news, and commentary, follow us on Twitter. The Weekly Drone Roundup is a newsletter from the Center for the Study of the Drone. It covers news, commentary, analysis and technology from the drone world. You can subscribe to the Roundup here.

 

Swarms of smart drones to revolutionise how we watch sports

Credit: Flickr/ Ville Hyvönen

by Joe Dodgshun
Drone innovators are transforming the way we watch events, from football matches and boat races to music festivals.

Anyone who has watched coverage of a festival or sports event in the last few years will probably have witnessed commercial drone use — in the form of breathtaking aerial footage.

But a collaboration of universities, research institutes and broadcasters is looking to take this to the next level by using a small swarm of intelligent drones.

The EU-funded MULTIDRONE project seeks to create teams of three to five semi-automated drones that can react to and capture unfolding action at large-scale sports events.

Project coordinator Professor Ioannis Pitas, of the University of Bristol, UK, says the collaboration aims to have prototypes ready for testing by its media partners Deutsche Welle and Rai – Radiotelevisione Italiana within 18 months.

‘Deutsche Welle has two potential uses lined up – filming the Rund um Wannsee boat race in Berlin, Germany, and also filming football matches with drones instead of normal cameras – while Rai is interested in covering cycling races,’ said Prof. Pitas.

‘We think we have the potential to offer a much better film experience at a reduced cost compared to helicopters or single drones, producing a new genre in drone cinematography.’


But before they can chase the leader of the Tour de France, MULTIDRONE faces the hefty challenge of creating AI that allows its drones to safely carry out a mission as a team.

Prof. Pitas says safety is the utmost priority, so the drones will include advanced crowd avoidance mechanisms and the ability to make emergency landings.

And it’s not just safety in the case of bad weather, a flat battery or a rogue football.

‘Security of communications is important as a drone could otherwise be hijacked, not just undermining privacy but also raising the possibility that it could be used as a weapon,’ said Prof. Pitas.

The early project phase will have a strong focus on ethics to prevent any issues around privacy.

‘People are sensitive about drones and about being filmed and we’re approaching this in three ways — trying to avoid shooting over private spaces, getting consent from the athletes being followed, and creating mechanisms that decide which persons to follow and blur other faces.’

If they can pull it off, he predicts a huge boost for the European entertainment industry and believes it could lead to much larger drone swarms capable of covering city-wide events.

Drones-on-demand
According to Gartner research, sales of commercial-use drones are set to jump from 110 000 units in 2016 to 174 000 this year. Although 2 million toy drones were snapped up last year for USD 1.7 billion, the commercial market dwarfed this at USD 2.8 billion.

Aside from pure footage, drones have also proven their worth in research, disaster response, construction and even in monitoring industrial assets.

One company trying to open up the market to those needing a sky-high helping hand is Integra Aerial Services, a young drones-as-a-service company.

INAS, an offshoot of Danish aeronautics firm Integra Holding Group, was launched in 2014 thanks to an EU-backed feasibility study.

INAS draws on more than 25 years of experience in aviation and used its knowledge of the sector’s legislation to shape a business model targeting heavier, more versatile drones weighing up to 25 kilogrammes. It has already been granted a commercial drone operating license by the Danish Civil Aviation Authority.

These bigger drones have far more endurance than typical toy drones, which can weigh anywhere from 250 grams to several kilos. INAS CEO Gilles Fartek says their bigger size means they can carry multiple sensors, thus collecting all the needed data in one fell swoop, instead of across multiple flights.

For example, one of their drones carries a LIDAR (Light Detection and Ranging) sensor over Greenland to measure ice thickness as an indicator of climate change, but could also carry a 100-megapixel, high-definition camera.

While INAS spends most of the Arctic summer running experiments from the remote host Station Nord in Greenland, Fartek says they’re free to use the drones for different projects in other seasons, mostly in areas of environmental research, mapping and agricultural monitoring.

‘You can’t match the quality of data for the price, but drone-use regulations in Europe are still quite complicated and make between-country operations almost impossible,’ said Fartek.

‘The paradox is that you have an increasing demand for such civil applications across Europe and even in institutional areas like civil protection and maritime safety where they cannot use military drones.’

A single European sky
These issues, and more, should soon be addressed by SESAR, the project which coordinates all EU research and development activities in air traffic management. SESAR plans to deploy a harmonised approach to European airspace management by 2030 in order to meet a predicted leap in air traffic.

Recently SESAR unveiled its blueprint outlining how it plans to make drone use in low-level airspace safe, secure and environmentally friendly. They hope this plan will be ready by 2019, paving the way for an EU drone services market by safely integrating highly automated or autonomous drones into low-level airspace of up to 150 metres.

Modelled after manned aviation traffic management, the plan will include registration of drones and operators, provide information for autonomous drone flights and introduce geo-fencing to limit areas where drones can fly.

The Issue
Emerging drone sectors include delivery services, industrial data collection, infrastructure inspection, precision agriculture, transportation, and logistics.

The market for drone services is expected to grow substantially in the coming years with an estimated worth of EUR 10 billion by 2035.

To support high-potential small- and medium-sized enterprises (SMEs), the European Commission has allocated EUR 3 billion over the period 2014-2020. A further EUR 17 billion was set aside under the Industrial Leadership pillar of the EU’s current research funding programme Horizon 2020.

More info
MULTIDRONE

INAS

Robotics industry growing faster than expected

Two reputable research resources are reporting that the robotics industry is growing more rapidly than expected. BCG (Boston Consulting Group) is conservatively projecting that the market will reach $87 billion by 2025; Tractica, incorporating the robotic and AI elements of the emerging self-driving industry, is forecasting the market will reach $237 billion by 2022.

Both research firms acknowledge that yesterday’s robots, which were blind, big, dangerous, and difficult to program and maintain, are being replaced and supplemented with newer, more capable ones. Today’s new and future robots will have voice and language recognition; access to super-fast communications, data, and libraries of algorithms; learning capability; mobility; portability; and dexterity. These new precision robots can sort and fill prescriptions, pick and pack warehouse orders, and sort, inspect, process, and handle fruits and vegetables, plus a myriad of other industrial and non-industrial tasks, most faster than humans, all the while working safely alongside them.

Boston Consulting Group (BCG)

Gaining Robotic Advantage, June 2017, 13 pages, free

BCG suggests that business executives be aware of ways robots are changing the global business landscape and think and act now. They see robotics-fueled changes coming in retail, logistics, transportation, healthcare, food processing, mining and agriculture.

BCG cites the following drivers:

  • Private investment in the robotics space continues to amaze, with exponential year-over-year funding curves and sensational billion-dollar acquisitions.
  • Prices continue to fall on robots, sensors, CPUs and communications, while capabilities continue to increase.
  • Robot programming is being transformed by easier interfaces, GUIs and ROS.
  • The prospect of a self-driving vehicle industry disrupting transportation is propelling a talent grab and strategic acquisitions by competing international players with deep pockets.
  • 40% of robotic startups have been in the consumer sector, and their products will soon augment humans in high-touch fields such as health and elder care.

BCG also offers the following example of how paying close attention can confer an advantage:

“Amazon gained a first-mover advantage in 2012 when it bought Kiva Systems, which makes robots for warehouses. Once a Kiva customer, Amazon acquired the robot maker to improve the productivity and margins of its network of warehouses and fulfillment centers. The move helped Amazon maintain its low costs and expand its rapid delivery capabilities. It took five years for a Kiva alternative to hit the market. By then, Amazon had a jump on its rivals and had developed an experienced robotics team, giving the company a sustainable edge.”

Tractica

Robotics Market Forecast – June 2017, 26 pages, $4,200
Drones for Commercial Applications – June 2017, 196 pages, $4,200
AI for Automotive Applications – May 2017, 63 pages, $4,200
Consumer Robotics – May 2017, 130 pages, $4,200

The key story is that industrial robotics, the traditional pillar of the market long dominated by Japanese and European manufacturers, has given way to non-industrial categories such as personal assistant robots, UAVs, and autonomous vehicles. The epicenter has shifted toward Silicon Valley, now a hotbed for artificial intelligence (AI), a set of technologies that in turn drive many of the most significant advances in robotics. Consequently, Tractica forecasts that the global robotics market will grow rapidly between 2016 and 2022, with revenue from unit sales of industrial and non-industrial robots rising from $31 billion in 2016 to $237.3 billion by 2022. The market intelligence firm anticipates that most of this growth will be driven by non-industrial robots.

Tractica is headquartered in Boulder and analyzes global market trends and applications for robotics and related automation technologies within consumer, enterprise, and industrial marketplaces and related industries.

General Research Reports

  • Global autonomous mobile robots market, June 2017, 95 pages, TechNavio, $2,500
    TechNavio forecasts that the global autonomous mobile robots market will grow at a CAGR of more than 14% through 2021.
  • Global underwater exploration robots, June 2017, 70 pages, TechNavio, $3,500
    TechNavio forecasts that the global underwater exploration robots market will grow at a CAGR of 13.92% during the period 2017-2021.
  • Household vacuum cleaners market, March 2017, 134 pages, Global Market Insights, $4,500
    Global Market Insights forecasts that the household vacuum cleaners market will surpass $17.5 billion by 2024, with global shipments estimated to exceed 130 million units by 2024, albeit at a low 3.0% CAGR. Robotic vacuums show a slightly higher CAGR.
  • Global unmanned surface vehicle market, June 2017, Value Market Research, $3,950
    Value Market Research analyzed drivers (security and mapping) versus restraints (such as AUVs and ROVs) and made its forecasts for the period 2017-2023.
  • Innovations in Robotics, Sensor Platforms, Block Chain, and Artificial Intelligence for Homeland Security, May 2017, Frost & Sullivan, $6,950
    This Frost & Sullivan report covers recent developments such as co-bots for surveillance applications, airborne sensor platforms for border security, blockchain tech, AI as first responder, and tech for detecting nuclear threats.
  • Top technologies in advanced manufacturing and automation, April 2017, Frost & Sullivan, $4,950
    This Frost & Sullivan report focuses on exoskeletons, metal and nano 3D printing, co-bots and agile robots – all of which are in the top 10 technologies covered.
  • Mobile robotics market, December 2016, 110 pages, Zion Market Research, $4,199
    Forecasts that the global mobile robotics market will reach $18.8 billion by the end of 2021, growing at a CAGR of slightly above 13.0% between 2017 and 2021.
  • Unmanned surface vehicle (USV) market, May 2017, MarketsandMarkets, $5,650
    MarketsandMarkets forecasts the unmanned surface vehicle (USV) market to grow from $470.1 million in 2017 to $938.5 million by 2022, at a CAGR of 14.83%.
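The report summaries above quote compound annual growth rates (CAGR) alongside start and end values, and the figures are easy to sanity-check with the standard formula CAGR = (end/start)^(1/years) - 1. A minimal sketch (plain Python, not taken from any of the reports) that reproduces the MarketsandMarkets USV numbers:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two values over `years` years."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# MarketsandMarkets USV forecast: $470.1M in 2017 -> $938.5M in 2022 (5 years).
rate = cagr(470.1, 938.5, 5)
print(f"{rate:.2%}")  # -> 14.83%, matching the quoted CAGR
```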

Agricultural Research Reports

  • Global agricultural robots market, May 2017, 70 pages, TechNavio, $2,500
    Forecasts the global agricultural robots market will grow steadily at a CAGR of close to 18% through 2021.
  • Agriculture robots market, June 2017, TMR Research, $3,716
    Robots are poised to replace agricultural hands. They can pluck fruit, sow and reap crops, and milk cows, and they carry out these tasks faster and with a high degree of accuracy. This, coupled with mandates for higher minimum pay in many countries, has been good news for the global agriculture robots market.
  • Agricultural Robots, December 2016, 225 pages, Tractica, $4,200
    Forecasts that shipments of agricultural robots will increase from 32,000 units in 2016 to 594,000 units annually in 2024, and that the market will reach $74.1 billion in annual revenue by 2024. The report, done in conjunction with The Robot Report, profiles over 165 companies involved in developing robotics for the industry.

Second edition of Springer Handbook of Robotics

The Second Edition of the award-winning Springer Handbook of Robotics, edited by Bruno Siciliano and Oussama Khatib, has recently been published. The contents of the first edition have been restructured to achieve four main objectives: the enlargement of foundational topics for robotics, the enlightenment of the design of various types of robotic systems, the extension of the treatment of robots moving in the environment, and the enrichment of advanced robotics applications. Most previous chapters have been revised, fifteen new chapters have been introduced on emerging topics, and a new generation of authors has joined the handbook's team. As with the first edition, a truly interdisciplinary approach has been pursued, in line with the expansion of robotics across the boundaries of related disciplines. Again, the authors were asked to step outside their comfort zones, as the Editorial Board teamed up authors who had never worked together before.

No doubt one of the most innovative elements is the inclusion of multimedia content to leverage the valuable written content inside the book. Under the editorial leadership of Torsten Kröger, a web portal has been created to host the Multimedia Extension of the book, serving as a quick one-stop shop for more than 700 videos associated with specific chapters. In addition, video tutorials have been created for each of the seven parts of the book, which benefit everyone from PhD students to seasoned robotics experts who have been in the field for years. A special video related to the contents of the first chapter shows the journey of robotics through the latest and coolest developments of the last 15 years. As publishing explores new interactive technologies, an app has been made available in the Google Play and iOS app stores to add a further multimedia layer to the reader's experience. With the app, readers can point their smartphone or tablet camera at a page containing one or more special icons to produce an augmented-reality view on the screen, watching videos as they read along in the book.


For more information on the book, please visit the Springer Handbook website.

The Multimedia Portal offers free access to more than 700 accompanying videos. In addition, a Multimedia App for smartphones and tablets is now downloadable from the App Store and Google Play, allowing you to easily access multimedia content while reading the book.

New Horizon 2020 robotics projects, 2016: CYBERLEGs++

In 2016, the European Union co-funded 17 new robotics projects from the Horizon 2020 Framework Programme for research and innovation. Sixteen of these resulted from the robotics work programme, and one from the Societal Challenges part of Horizon 2020. The robotics work programme implements the robotics strategy developed by SPARC, the Public-Private Partnership for Robotics in Europe (see the Strategic Research Agenda).

Every week, euRobotics will publish a video interview with a project, so that you can find out more about their activities. This week features CYBERLEGs++: The CYBERnetic LowEr-Limb CoGnitive Ortho-prosthesis Plus Plus.

Objectives

The goal of CYBERLEGs++ is to validate the technical and economic viability of the powered robotic ortho-prosthesis developed within the FP7-ICT-CYBERLEGs project. The aim is to enhance or restore the mobility of transfemoral amputees and to enable them to perform locomotion tasks such as ground-level walking, walking up and down slopes, climbing and descending stairs, standing up, sitting down, and turning in real-life scenarios. Restored mobility will allow amputees to be physically active, counteracting physical decline and improving their overall health status and quality of life.


Expected Impact

By demonstrating a modular robotics technology for healthcare in an operational environment (TRL 7), from both a technical and an economic viability viewpoint, with the ultimate goal of fostering its market exploitation, CYBERLEGs Plus Plus will have an impact on:

  • Society: CLs++ technology will contribute to increasing the mobility of dysvascular amputees and, more generally, of disabled persons with mild lower-limb impairments;
  • Science and technology: CLs++ will further advance the hardware and software modules of the ortho-prosthesis developed within the FP7 CYBERLEGs project and validate its efficacy through a multi-centre clinical study;
  • Market: CLs++ will foster the market exploitation of high-tech robotic systems and thus promote the growth of both a robotics SME and a large healthcare company.

Partners
SCUOLA SUPERIORE SANT’ANNA (SSSA)
UNIVERSITÉ CATHOLIQUE DE LOUVAIN (UCL)
VRIJE UNIVERSITEIT BRUSSEL (VUB)
UNIVERZA V LJUBLJANI (UL)
FONDAZIONE DON CARLO GNOCCHI (FDG)
ÖSSUR (OSS)
IUVO S.R.L. (IUVO)

Coordinator
Prof. Nicola Vitiello, The BioRobotics Institute
Scuola Superiore Sant’Anna, Pisa, Italy
nicola.vitiello@santannapisa.it

Project website
www.cyberlegs.org

Watch all EU-projects videos

If you enjoyed reading this article, you may also want to read:

See all the latest robotics news on Robohub, or sign up for our weekly newsletter.

Rapid outdoor/indoor 3D mapping with a Husky UGV

by Nicholas Charron

The need for fast, accurate 3D mapping solutions has quickly become a reality for many industries wanting to adopt new technologies in AI and automation. New applications requiring these 3D mapping platforms include surveillance, mining, automated measurement & inspection, construction management & decommissioning, and photo-realistic rendering. Here at Clearpath Robotics, we decided to team up with Mandala Robotics to show how easily you can implement 3D mapping on a Clearpath robot.

3D Mapping Overview

3D mapping on a mobile robot requires Simultaneous Localization and Mapping (SLAM), for which many different solutions are available. Localization can be achieved by fusing many different types of pose estimates. Pose estimation can be done using combinations of GPS measurements, wheel encoders, inertial measurement units, 2D or 3D scan registration, optical flow, visual feature tracking and other techniques. Mapping can be done simultaneously using the lidars and cameras that are used for scan registration and visual position tracking, respectively. This allows a mobile robot to track its position while creating a map of the environment. Choosing which SLAM solution to use is highly dependent on the application and the environment to be mapped. Although many 3D SLAM software packages exist, and not all can be discussed here, there are few 3D mapping hardware platforms that offer full end-to-end 3D reconstruction on a mobile platform.
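To make the pose-estimation idea concrete, here is a minimal sketch (plain Python, a simplified 2D model of our own, not taken from any SLAM package) of how successive relative motion estimates from scan registration or odometry are chained into a global pose; real SLAM systems fuse these sources probabilistically rather than simply composing them:

```python
import math

def compose(pose, delta):
    """Apply a relative motion (dx, dy, dtheta), expressed in the robot's
    own frame, to a global pose (x, y, theta)."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

# Drive 1 m forward while turning 90 degrees left, then 1 m forward again:
pose = (0.0, 0.0, 0.0)
for delta in [(1.0, 0.0, math.pi / 2), (1.0, 0.0, 0.0)]:
    pose = compose(pose, delta)
# The robot ends near (1, 1), facing along +y.
```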

Existing 3D Mapping Platforms

We will briefly highlight some of the more popular commercial 3D mapping platforms, which carry one or more lidars and, in some cases, optical cameras for point cloud data collection. It is important to note that there are two ways to collect a 3D point cloud using lidars:

1. Use a 3D lidar: a single device with multiple horizontally stacked laser beams
2. Tilt or rotate a 2D lidar to get 3D coverage

Tilting a 2D lidar typically refers to rotating it back and forth about its horizontal axis, while rotating usually refers to continuous 360-degree rotation of a vertically or horizontally mounted lidar.
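As an illustration of the tilting approach, the sketch below (a simplified geometric model of our own, not driver code from any of these products) projects one 2D scan into 3D sensor-frame points given the tilt of the scan plane; a real driver would also interpolate the tilt angle across the sweep and time-synchronize it with the scan:

```python
import math

def scan_to_3d(ranges, angle_min, angle_step, tilt):
    """Project a single 2D lidar scan into 3D points in the sensor frame,
    assuming the scan plane is pitched down by `tilt` radians about the
    sensor's lateral (y) axis and the lidar origin lies on the tilt axis."""
    points = []
    for i, r in enumerate(ranges):
        a = angle_min + i * angle_step           # beam angle in the scan plane
        x, y = r * math.cos(a), r * math.sin(a)  # point in the scan plane
        points.append((x * math.cos(tilt), y, -x * math.sin(tilt)))
    return points

# Three 1 m beams spanning -45 to +45 degrees, lidar pitched down 30 degrees.
pts = scan_to_3d([1.0, 1.0, 1.0], -math.pi / 4, math.pi / 4, math.radians(30))
```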

Example 3D Mapping Platforms: 1. MultiSense SL (Left) by Carnegie Robotics, 2. 3DLS-K (Middle) by Fraunhofer IAIS Institute, 3. Cartographer Backpack (Right) by Google.

1. MultiSense SL

The MultiSense SL was developed by Carnegie Robotics and provides a compact, lightweight 3D data collection unit for researchers. The unit has a tilting Hokuyo 2D lidar, a stereo camera and LED lights, and comes pre-calibrated, which allows for the generation of coloured point clouds. The platform ships with a full software development kit (SDK) and open source ROS software, and was the sensor of choice at the DARPA Robotics Challenge for humanoid robots.

2. 3DLS-K

The 3DLS-K is a dual-tilting unit made by Fraunhofer IAIS Institute with the option of using SICK LMS-200 or LMS-291 lidars. Fraunhofer IAIS also offers other configurations with continuously rotating 2D SICK or Hokuyo lidars. These systems allow for the collection of non-coloured point clouds. With the purchase of these units, a full application program interface (API) is available for configuring the system and collecting data.

3. Cartographer Backpack

The Cartographer Backpack is a mapping unit with two static Hokuyo lidars (one horizontal and one vertical) and an on-board computer. Google released cartographer software as an open source library for performing 3D mapping with multiple possible sensor configurations. The Cartographer Backpack is an example of a possible configuration to map with this software. Cartographer allows for integration of multiple 2D lidars, 3D lidars, IMU and cameras, and is also fully supported in ROS. Datasets are also publicly available for those who want to see mapping results in ROS.

Mandala Mapping – System Overview

Thanks to the team at Mandala Robotics, we got our hands on one of their 3D mapping units to try some mapping of our own. The unit consists of a mount for a rotating vertical lidar, a fixed horizontal lidar, and an onboard computer with an Nvidia GeForce GTX 1050 Ti GPU. The horizontal lidar allows for 2D scan registration as well as 2D mapping and obstacle avoidance. The vertical rotating lidar is used to acquire the 3D point cloud data. In our implementation, real-time SLAM was performed solely using 3D scan registration (more on this later), programmed to fully utilize the onboard GPU. The software used to implement this mapping can be found in the mandala-mapping GitHub repository.

Scan registration is the process of combining (or stitching together) two subsequent point clouds, in 2D or 3D, to estimate the change in pose between the scans. The resulting motion estimates feed into SLAM, and each new point cloud can be added to the existing one to build up a map. This is achieved by running iterative closest point (ICP) between the two subsequent scans. ICP performs a nearest-neighbour search to match each point in the reference scan to a point in the new scan. Optimization is then performed to find the rotation and translation matrices that minimize the distance between the matched neighbours. By iterating this process, the result converges to the true rotation and translation that the robot underwent between the two scans. This is the process that was used for 3D mapping in the following demo.
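The ICP loop just described can be sketched in a few lines. The version below is illustrative only: 2D, point-to-point, with brute-force nearest-neighbour matching on a toy scan, whereas production implementations work on large 3D clouds and add k-d trees, subsampling and outlier rejection (the GPU is what makes this tractable in real time on the Mandala unit):

```python
import math

def transform(points, theta, tx, ty):
    """Rotate points about the origin by theta, then translate by (tx, ty)."""
    return [(x * math.cos(theta) - y * math.sin(theta) + tx,
             x * math.sin(theta) + y * math.cos(theta) + ty) for x, y in points]

def icp_step(ref, new):
    """One point-to-point ICP iteration in 2D: match each point of `new` to
    its nearest neighbour in `ref`, then solve in closed form for the rigid
    transform that best aligns the matched pairs."""
    matches = [min(ref, key=lambda p: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)
               for q in new]
    cx = sum(q[0] for q in new) / len(new)
    cy = sum(q[1] for q in new) / len(new)
    mx = sum(p[0] for p in matches) / len(matches)
    my = sum(p[1] for p in matches) / len(matches)
    # Closed-form 2D rotation from the centered correspondences.
    s = sum((q[0] - cx) * (p[1] - my) - (q[1] - cy) * (p[0] - mx)
            for p, q in zip(matches, new))
    c = sum((q[0] - cx) * (p[0] - mx) + (q[1] - cy) * (p[1] - my)
            for p, q in zip(matches, new))
    theta = math.atan2(s, c)
    tx = mx - (cx * math.cos(theta) - cy * math.sin(theta))
    ty = my - (cx * math.sin(theta) + cy * math.cos(theta))
    return theta, tx, ty

# A reference scan, and the same scan seen after a small unknown motion.
ref = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (2.0, 2.0)]
aligned = transform(ref, math.radians(5), 0.1, -0.1)
for _ in range(5):
    aligned = transform(aligned, *icp_step(ref, aligned))
```

Because the test motion is small, the nearest-neighbour matches are all correct and the loop snaps `aligned` back onto `ref`, recovering the applied rotation and translation.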

Mandala Robotics has also released additional examples of GPU computing tasks useful for robotics and SLAM. These examples can be found here.

Mandala Mapping Results

The following video shows some of our results from mapping areas within the Clearpath office, lab and parking lot. The datasets collected for this video can be downloaded here.

The Mandala Mapping software was very easy to get up and running for someone with basic knowledge of ROS. There is one launch file which runs the Husky base software as well as the 3D mapping. Each scan can be initiated by sending a simple scan request message to the mapping node, or by pressing one button on the joystick used to drive the Husky. Furthermore, with a little more ROS knowledge, it is easy to incorporate autonomy into the 3D mapping. Our forked repository shows how a short C++ script can be written to enable constant scan intervals while navigating in a straight line. Alternatively, one could easily incorporate 2D SLAM such as gmapping together with the move_base package in order to give specific scanning goals within a map.

Why use Mandala Mapping on your robot?

If you are looking for a quick and easy way to collect 3D point clouds, with the versatility to use multiple lidar types, then this system is a great choice. The hardware work involved with setting up the unit is minimal and well documented, and it is preconfigured to work with your Clearpath Husky. Therefore, you can be up and running with ROS in a few days! The mapping is done in real time, with only a little lag time as your point cloud size grows, and it allows you to visualize your map as you drive.

The downside to this system, compared to the MultiSense SL for example, is that you cannot yet get a coloured point cloud, since no cameras have been integrated. However, Mandala Robotics is currently beta testing a similar system with an additional 360-degree camera. That system uses the Ladybug5 and will allow RGB colour to be mapped to each point cloud element. Keep an eye out for future Clearpath blog posts in case we get our hands on one of these systems! All things considered, the Mandala Mapping kit offers a great alternative to the aforementioned units and fills many of the gaps in their functionality.

The post Rapid Outdoor/Indoor 3D Mapping with a Husky UGV appeared first on Clearpath Robotics.

Baidu’s self-driving tech plans revealed

In the race to develop self-driving technology, Chinese Internet giant Baidu unveiled its 50+ partners in an open source development program, revised its timeline for introducing autonomous driving capabilities on open city roads, described the Project Apollo consortium and its goals, and declared Apollo to be the ‘Android of the autonomous driving industry’.

At a developer's conference last week in Beijing, Baidu described its plans and timetable for its self-driving car technology. It will start test-driving in restricted environments immediately, before gradually introducing fully autonomous driving capabilities on highways and open city roads by 2020. Baidu's goal is to get those vehicles on the roads in China, the world's biggest auto market, with the hope that the same technology, embedded in exported Chinese vehicles, can then conquer the United States. To do so, Baidu has compiled a list of cooperative partners, a consortium of 50+ public and private entities, and named it Apollo, after NASA's massive Apollo moon-landing program. 

Project Apollo

The program is making its autonomous car software open source in the same way that Google released its Android operating system for smartphones. By encouraging companies to build upon the system and share their results, it hopes to overtake rivals such as Google/Waymo, Tencent, Alibaba and others researching self-driving technology. 

MIT Technology Review provided a description of the open source Apollo project:

The Apollo platform consists of a core software stack, a number of cloud services, and self-driving vehicle hardware such as GPS, cameras, lidar, and radar.

The software currently available to outside developers is relatively simple: it can record the behavior of a car being driven by a person and then play that back in autonomous mode. This November, the company plans to release perception capabilities that will allow Apollo cars to identify objects in their vicinity. This will be followed by planning and localization capabilities, and a driver interface.

The cloud services being developed by Baidu include mapping services, a simulation platform, a security framework, and Baidu’s DuerOS voice-interface technology.

Members of the project include Chinese automakers Chery, Dongfeng Motor, Foton, Nio, Yiqi and FAW Group. Tier 1 members include Continental, Bosch, Intel, Nvidia, Microsoft and Velodyne. Other partners include Chinese universities, governmental agencies, Autonomous Stuff, TomTom, Grab and Ford. The full list of members can be seen here.

Quoting from Bloomberg News regarding the business aspect of Project Apollo:

China has set a goal for 10 to 20 percent of vehicles to be highly autonomous by 2025, and for 10 percent of cars to be fully self-driving in 2030. Didi Chuxing, the ride-sharing app that beat Uber in China, is working on its own product, as are several local automakers. It’s too early to tell which will ultimately succeed though Baidu’s partnership approach is sound, said Marie Sun, an analyst with Morningstar Investment Service.

“This type of technology needs cooperation between software and hardware from auto-manufacturers so it’s not just Baidu that can lead this,” she said. If Baidu spins off the car unit, “in the longer term, Baidu should maintain a major shareholder position so they can lead the growth of the business.”

Baidu and Apollo have a significant advantage over Google's Waymo: Baidu has a presence in the United States, whereas Alphabet has none in China because Google closed down its search site in 2010 rather than give in to China's internet censorship.

Strategic Issue

According to the Financial Times, “autonomous vehicles pose an existential threat [to global car manufacturers]. Instead of owning cars, consumers in the driverless age will simply summon a robotic transportation service to their door. One venture capitalist says auto executives have come to him saying they know they are “screwed”, but just want to know when it will happen.” 

This desperation has prompted a string of big acquisitions and joint ventures among competing providers, including those in China. To cite just a few:

  • Last year GM paid $1bn for Cruise, a self-driving car start-up.
  • Uber paid $680m for Otto, an autonomous trucking company that was less than a year old.
  • In March, Intel spent $15bn to buy Israel’s Mobileye, which makes self-driving sensors and software.
  • Baidu acquired Raven Tech, an Amazon Echo competitor; 8i, an augmented reality hologram startup; Kitt, a conversational language engine; and XPerception, a vision systems developer.
  • Tencent invested in mapping provider Here and acquired 5% of Tesla.
  • Alibaba announced that it is partnering with Chinese Big 4 carmaker SAIC in their self-driving effort.

China Network

Baidu’s research team in Silicon Valley is pivotal to its goals. Baidu was one of the first Chinese companies to set up in Silicon Valley, initially to tap into its talent pool. Today it is the center of a “China network” of almost three dozen firms, built through investments, acquisitions and partnerships.

Baidu is rapidly building outward from that Silicon Valley center:

  • It formed a self-driving car sub-unit in April which now employs more than 100 researchers and engineers. 
  • It partnered with chipmaker Nvidia.
  • It acquired vision systems startup XPerception.
  • It has begun testing its autonomous vehicles in China and California. 

Regarding XPerception, Gartner research analyst Michael Ramsey said in a CNBC interview:

“XPerception has expertise in processing and identifying images, an important part of the sensing for autonomous vehicles. The purchase may help push Baidu closer to the leaders, but it is just one piece.”

XPerception is just one of many Baidu puzzle pieces intended to bring talent and intellectual property to the Apollo project. It acquired Raven Tech and Kitt AI to gain conversational transaction processing. It acquired 8i, an augmented reality hologram startup, to add AR — which many expect to be crucial in future cars — to the project. And it suggested that the acquisition spree will continue as needed.

Bottom Line

China has set a goal for 10 to 20 percent of vehicles to be highly autonomous by 2025, and for 10 percent of cars to be fully self-driving by 2030. Baidu wants to provide the technology to get those vehicles on the roads in China, with the hope that the same technology, embedded in exported Chinese vehicles, can then conquer the United States. It seems well poised to do so.

Robots Podcast #238: Midwest Speech and Language Days 2017 Posters, with Michael White, Dmitriy Dligach and Denis Newman-Griffiths



In this episode, MeiXing Dong conducts interviews at the 2017 Midwest Speech and Language Days workshop in Chicago. She talks with Michael White of Ohio State University about question interpretation in a dialogue system; Dmitriy Dligach of Loyola University Chicago about extracting patient timelines from doctor’s notes; and Denis Newman-Griffiths of Ohio State University about connecting words and phrases to relevant medical topics.


Udacity Robotics video series: Interview with Nick Kohut from Dash Robotics


Mike Salem from Udacity’s Robotics Nanodegree is hosting a series of interviews with professional roboticists as part of their free online material.

This week we’re featuring Mike’s interview with Nick Kohut, Co-Founder and CEO of Dash Robotics.

Nick is a former robotics postdoc at Stanford and received his PhD in Control Systems from UC Berkeley. At Dash Robotics, Nick handles team-building and project management.

You can find all the interviews here. We’ll be posting one per week on Robohub.

China’s e-commerce dynamo JD makes deliveries via mobile robots

China’s second-biggest e-commerce company, JD.com (Alibaba is first), is testing mobile robots to make deliveries to its customers, and imagining a future with fully unmanned logistics systems.

 Story idea and images courtesy of RoboticsToday.com.au.

On the last day of a two-week-long shopping bonanza that recorded sales of around $13 billion, some deliveries were made using mobile robots designed by JD. It’s the first time that the company has used delivery robots in the field. The bots delivered packages to multiple Beijing university campuses such as Tsinghua University and Renmin University. 

JD has been testing delivery robots since November last year. At that time, the cost of a single robot was almost $88,000.

The company has since been working to lower the cost and increase the robots' capabilities. The white, four-wheeled UGVs can carry five packages at once and travel 13 miles on a charge. They can climb a 25° incline and find the shortest route from warehouse to destination.

Once a robot reaches its destination, it sends a text message notifying the recipient of the delivery. Users can accept the delivery via face-recognition technology or by entering a code.

The UGVs now cost $7,300 per unit, which JD figures can cut the cost of a delivery from just under $1 for a human courier to about 20 cents for a robot.

JD is also testing the world’s largest drone-delivery network, including flying drones carrying products weighing as much as 2,000 pounds.

“Our logistics systems can be unmanned and 100% automated in 5 to 8 years,” said Liu Qiangdong, JD’s chairman.

Simulated car demo using ROS Kinetic and Gazebo 8

By Tully Foote

We are excited to show off a simulation of a Prius in Mcity using ROS Kinetic and Gazebo 8. ROS enabled the simulation to be developed faster by using existing software and libraries. The vehicle’s throttle, brake, steering, and transmission are controlled by publishing to a ROS topic. All sensor data is published using ROS, and can be visualized with RViz.
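As an illustration of this publish-to-a-topic control model, the sketch below builds and range-checks a command using common normalization conventions (throttle and brake in [0, 1], steering in [-1, 1]); the field names and ranges here are illustrative assumptions rather than the demo's actual message definition, and in the demo itself the equivalent fields of a ROS message would be filled in and published with a standard rospy or roscpp publisher:

```python
def make_control_command(throttle, brake, steer):
    """Build a throttle/brake/steering command, clamping each field to the
    range a vehicle controller typically accepts (ranges are assumptions)."""
    def clamp(v, lo, hi):
        return max(lo, min(hi, v))
    return {
        "throttle": clamp(throttle, 0.0, 1.0),  # 0 = coast, 1 = full throttle
        "brake":    clamp(brake,    0.0, 1.0),  # 0 = released, 1 = full brake
        "steer":    clamp(steer,   -1.0, 1.0),  # -1/+1 = full lock either way
    }

# Out-of-range inputs are clamped before being sent to the vehicle.
cmd = make_control_command(throttle=1.4, brake=-0.2, steer=0.5)
print(cmd)  # -> {'throttle': 1.0, 'brake': 0.0, 'steer': 0.5}
```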

We leveraged Gazebo’s capabilities to incorporate existing models and sensors.
The world contains a new model of Mcity and a freeway interchange. There are also models from the gazebo models repository, including dumpsters, traffic cones, and a gas station. On the vehicle itself there is a 16-beam lidar on the roof, 8 ultrasonic sensors, 4 cameras, and 2 planar lidars.

The simulation is open source and available on GitHub at osrf/car_demo. Try it out by installing nvidia-docker and pulling “osrf/car_demo” from Docker Hub. More information about building and running is available in the README in the source repository.

Talking Machines: Bias variance dilemma for humans and the arm farm, with Jeff Dean

In episode four of season three, Neil introduces us to the ideas behind the bias-variance dilemma (and how we can think about it in our daily lives). Plus, we answer a listener question about how to make sure your neural networks don't get fooled. Our guest for this episode is Jeff Dean, Google Senior Fellow in the Research Group, where he leads the Google Brain project. We talk about a closet full of robot arms (the arm farm!), image recognition for diabetic retinopathy, and equality in data and the community.

Fun Fact: Geoff Hinton’s distant relative invented the word tesseract. (How cool is that. Seriously.)


If you enjoyed this episode, you may also want to listen to:

See all the latest robotics news on Robohub, or sign up for our weekly newsletter.
