Archive 19.11.2017

Bossa Nova raises $17.5 million for shelf-scanning mobile robots

Bossa Nova Robotics, a Silicon Valley developer of autonomous service robots for the retail industry, announced the close of a $17.5 million Series B funding round led by Paxion Capital Partners, with participation from Intel Capital, WRV Capital, Lucas Venture Group (LVG), and Cota Capital. This round brings Bossa Nova’s total funding to date to $41.7 million.

Bossa Nova helps large-scale stores automate the collection and analysis of on-shelf inventory data by driving its sensor-laden mobile robots autonomously through the aisles, navigating safely among customers and store associates. The robots capture images of store shelves and use AI to analyze the data and determine the status of each product, including location, price, and out-of-stock items, which is then aggregated and delivered to management in the form of a restock action plan.

They recently began testing their robots and analytic services in 50 Walmart stores across the US. They first deployed their autonomous robots in retail stores in 2013 and have since registered more than 710 miles and 2,350 hours of autonomous inventory scanning, capturing more than 80 million product images.

“We have worked closely with Bossa Nova to help ensure this technology, which is designed to capture and share in-store data with our associates in near real time, works in our unique store environment,” said John Crecelius, vice president of central operations at Walmart. “This is meant to be a tool that helps our associates quickly identify where they can make the biggest difference for our customers.”

CMU grads launched Bossa Nova Robotics in Pittsburgh as a designer of robotic toys. In 2009 the company launched two new products: Penbo, a fuzzy penguin-like robot that sang, danced, cuddled and communicated with her baby in their own Penbo language; and Prime-8, a loud, fast-moving gorilla-like robot aimed at boys. In 2011 and 2012 the company changed direction: it sold off the toy business and focused on developing a mobile robot based on CMU’s ballbot technology. It later switched to conventional casters and drive mechanisms and spent its energies on developing the camera, vision and AI analytics software behind its latest generation of shelf-scanning mobile robots.

Efficient data acquisition in MATLAB: Streaming HD video in real-time

The acquisition and processing of a video stream can be very computationally expensive. Typical image processing applications split the work across multiple threads, one acquiring the images, and another one running the actual algorithms. In MATLAB we can get multi-threading by interfacing with other languages, but there is a significant cost associated with exchanging data across the resulting language barrier. In this blog post, we compare different approaches for getting data through MATLAB’s Java interface, and we show how to acquire high-resolution video streams in real-time and with low overhead.

Motivation

For our booth at ICRA 2014, we put together a demo system in MATLAB that used stereo vision for tracking colored bean bags, and a robot arm to pick them up. We used two IP cameras that streamed H.264 video over RTSP. While the image processing and robot control parts worked as expected, acquiring images from both video streams fast enough to be useful proved to be a challenge.

Since we did not want to switch to another language, we decided to develop a small library for acquiring video streams. The project was later open sourced as HebiCam.

Technical Background

In order to save bandwidth most IP cameras compress video before sending it over the network. Since the resulting decoding step can be computationally expensive, it is common practice to move the acquisition to a separate thread in order to reduce the load on the main processing thread.

Unfortunately, doing this in MATLAB requires some workarounds due to the language’s single threaded nature, i.e., background threads need to run in another language. Out of the box, there are two supported interfaces: MEX for calling C/C++ code, and the Java Interface for calling Java code.

While both interfaces have strengths and weaknesses, practically all use cases can be solved using either one. For this project, we chose the Java interface in order to simplify cross-platform development and the deployment of binaries. The diagram below shows an overview of the resulting system.

Figure 1. System overview for a stereo vision setup

Starting background threads and getting the video stream into Java was relatively straightforward. We used the JavaCV library, which is a Java wrapper around OpenCV and FFMpeg that includes pre-compiled native binaries for all major platforms. However, passing the acquired image data from Java into MATLAB turned out to be more challenging.

The Java interface automatically converts between Java and MATLAB types by following a set of rules. This makes it much simpler to develop for than the MEX interface, but it does cause additional overhead when calling Java functions. Most of the time this overhead is negligible. However, for certain types of data, such as large and multi-dimensional matrices, the default rules are very inefficient and can become prohibitively expensive. For example, a 1080x1920x3 MATLAB image matrix gets translated to a byte[1080][1920][3] in Java, which means that there is a separate array object for every single pixel in the image.

As an additional complication, MATLAB stores image data in a different memory layout than most other libraries (e.g. OpenCV’s Mat or Java’s BufferedImage). While pixels are commonly stored in row-major order with interleaved channels ([height][width][channels]), MATLAB stores images transposed and in column-major order ([channels][width][height]). For example, if the red-green-blue pixels of a BufferedImage are laid out as [RGB][RGB][RGB]…​, the same image is laid out as [RRR…​][GGG…​][BBB…​] in MATLAB. Depending on the resolution, this conversion can become fairly expensive.
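As a small illustration of that layout difference (a sketch, not the benchmarked code; the variable names are hypothetical), an interleaved Java-style pixel array can be turned into an h-by-w-by-3 MATLAB image with a reshape followed by a permute:

% javaPixels: 1D uint8 array in Java/BufferedImage order (channel fastest, then column, then row)
h = 1080; w = 1920; c = 3;                               % example dimensions
javaPixels = zeros(h*w*c, 1, 'uint8');                   % placeholder pixel data
img = permute(reshape(javaPixels, c, w, h), [3 2 1]);    % -> 1080x1920x3 MATLAB image
% Depending on the source, the channel order (e.g. BGR vs. RGB) may also need flipping:
% img = img(:, :, [3 2 1]);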

In order to process images at a frame rate of 30 fps in real-time, the total time budget of the main MATLAB thread is 33ms per cycle. Thus, the acquisition overhead imposed on the main thread needs to be sufficiently low, i.e., a low number of milliseconds, to leave enough time for the actual processing.

Data Translation

We benchmarked five different ways to get image data from Java into MATLAB and compared their respective overhead on the main MATLAB thread. We omitted overhead incurred by background threads because it had no effect on the time budget available for image processing.

The full benchmark code is available here.
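As a rough sketch of how such per-approach overhead can be measured (the wrapper function here is hypothetical, not the actual benchmark code):

% timeit calls the function handle repeatedly and returns a robust average runtime
convert = @() convertDefaultFormat(converter);   % hypothetical wrapper around one conversion approach
t = timeit(convert);                             % seconds per call on the MATLAB thread
fprintf('Overhead per frame: %.2f ms\n', 1000 * t);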

1. Default 3D Array

By default MATLAB image matrices convert to byte[height][width][channels] Java arrays. However, when converting back to MATLAB there are some additional problems:

  • byte gets converted to int8 instead of uint8, resulting in an invalid image matrix

  • changing the type back to uint8 is somewhat messy because the uint8(matrix) cast sets all negative values to zero, and the alternative typecast(matrix, 'uint8') only works on vectors

Thus, converting the data to a valid image matrix still requires several operations.

% (1) Get matrix from byte[height][width][channels]
data = getRawFormat3d(this.javaConverter);
[height,width,channels] = size(data);

% (2) Reshape matrix to vector
vector = reshape(data, width * height * channels, 1);

% (3) Cast int8 data to uint8
vector = typecast(vector, 'uint8');

% (4) Reshape vector back to original shape
image = reshape(vector, height, width, channels);

2. Compressed 1D Array

A common approach to move image data across distributed components (e.g. ROS) is to encode the individual images using MJPEG compression. Doing this within a single process is obviously wasteful, but we included it because it is common practice in many distributed systems. Since MATLAB did not offer a way to decompress jpeg images in memory, we needed to save the compressed data to a file located on a RAM disk.

% (1) Get compressed data from byte[]
data = getJpegData(this.javaConverter);

% (2) Save as jpeg file
fileID = fopen('tmp.jpg','w+');
fwrite(fileID, data, 'int8');
fclose(fileID);

% (3) Read jpeg file
image = imread('tmp.jpg');

3. Java Layout as 1D Pixel Array

Another approach is to copy the pixel array of Java’s BufferedImage and to reshape the memory using MATLAB. This is also the accepted answer for How can I convert a Java Image object to a MATLAB image matrix?.

% (1) Get data from byte[] and cast to correct type
data = getJavaPixelFormat1d(this.javaConverter);
data = typecast(data, 'uint8');
[h,w,c] = size(this.matlabImage); % get dim info

% (2) Reshape matrix for indexing
pixelsData = reshape(data, 3, w, h);

% (3) Transpose and convert from row major to col major format (RGB case)
image = cat(3, ...
    transpose(reshape(pixelsData(3, :, :), w, h)), ...
    transpose(reshape(pixelsData(2, :, :), w, h)), ...
    transpose(reshape(pixelsData(1, :, :), w, h)));

4. MATLAB Layout as 1D Pixel Array

The fourth approach also copies a single pixel array, but this time the pixels are already stored in the MATLAB convention.

% (1) Get data from byte[] and cast to correct type
data = getMatlabPixelFormat1d(this.javaConverter);
[h,w,c] = size(this.matlabImage); % get dim info
vector = typecast(data, 'uint8');

% (2) Interpret pre-laid out memory as matrix
image = reshape(vector, h, w, c);

Note that the most efficient way we found for converting the memory layout on the Java side was to use OpenCV’s split and transpose functions. The code can be found in MatlabImageConverterBGR and MatlabImageConverterGrayscale.

5. MATLAB Layout as Shared Memory

The fifth approach is the same as the fourth with the difference that the Java translation layer is bypassed entirely by using shared memory via memmapfile. Shared memory is typically used for inter-process communication, but it can also be used within a single process. Running within the same process also simplifies synchronization since MATLAB can access Java locks.

% (1) Lock memory
lock(this.javaObj);

% (2) Force a copy of the data
image = this.memFile.Data.pixels * 1;

% (3) Unlock memory
unlock(this.javaObj);

Note that the code could be interrupted (ctrl+c) at any line, so the locking mechanism would need to be able to recover from bad states, or the unlocking would need to be guaranteed by using a destructor or onCleanup.

The multiplication by one forces a copy of the data. This is necessary because, under the hood, memmapfile only returns a reference to the underlying memory.
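One way to guarantee the unlock, as mentioned above, is onCleanup. A minimal sketch using the same lock/unlock calls (not HebiCam’s actual implementation):

lock(this.javaObj);
cleanupGuard = onCleanup(@() unlock(this.javaObj));  % unlock runs even if the user presses ctrl+c

% Force a copy of the shared memory while the lock is held
image = this.memFile.Data.pixels * 1;

clear cleanupGuard;  % releases the lock as soon as the copy is done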

Results

All benchmarks were run in MATLAB 2017b on an Intel NUC6I7KYK. The performance was measured using MATLAB’s timeit function. The background color of each cell in the result tables represents a rough classification of the overhead on the main MATLAB thread.

Table 1. Color classification
Color    Overhead   At 30 FPS
Green    <10%       <3.3 ms
Yellow   <50%       <16.5 ms
Orange   <100%      <33.3 ms
Red      >100%      >33.3 ms

The two tables below show the results for converting color (RGB) images as well as grayscale images. All measurements are in milliseconds.

Figure 2. Conversion overhead on the MATLAB thread in [ms]

The results show that the default conversion, as well as jpeg compression, are essentially non-starters for color images. For grayscale images, the default conversion works significantly better due to the fact that the data is stored in a much more efficient 2D array (byte[height][width]), and that there is no need to re-order pixels by color. Unfortunately, we currently don’t have a good explanation for the ~10x cost increase (rather than ~4x) between 1080p and 4K grayscale. The behavior was the same across computers and various different memory settings.

When copying the backing array of a BufferedImage we can see another significant performance increase due to the data being stored in a single contiguous array. At this point much of the overhead comes from re-ordering pixels, so by doing the conversion beforehand, we can get another 2-3x improvement.

Lastly, although accessing shared memory in combination with the locking overhead results in a slightly higher fixed cost, the copying itself is significantly cheaper, resulting in another 2-3x speedup for high-resolution images. Overall, going through shared memory scales very well and would even allow streaming of 4K color images from two cameras simultaneously.

Final Notes

Our main takeaway was that although MATLAB’s Java interface can be inefficient for certain cases, there are simple workarounds that can remove most bottlenecks. The most important rule is to avoid converting to and from large multi-dimensional matrices whenever possible.

Another insight was that shared-memory provides a very efficient way to transfer large amounts of data to and from MATLAB. We also found it useful for inter-process communication between multiple MATLAB instances. For example, one instance can track a target while another instance can use its output for real-time control. This is useful for avoiding coupling a fast control loop to the (usually lower) frame rate of a camera or sensor.

As for our initial motivation, after creating HebiCam we were able to develop and reliably run the entire demo in MATLAB. The video below shows the setup using old-generation S-Series actuators.

The race to own the autonomous super highway: Digging deeper into Broadcom’s offer to buy Qualcomm

Governor Andrew Cuomo of the State of New York declared last month that New York City will join 13 other states in testing self-driving cars: “Autonomous vehicles have the potential to save time and save lives, and we are proud to be working with GM and Cruise on the future of this exciting new technology.” For General Motors, this represents a major milestone in the development of its Cruise software, since the knowledge gained on Manhattan’s busy streets will be invaluable in accelerating its deep learning technology. In the spirit of one-upmanship, Waymo went one step further by declaring this week that it will be the first car company in the world to ferry passengers completely autonomously (without human engineers safeguarding the wheel).

As unmanned systems speed ahead toward consumer adoption, one challenge that Cruise, Waymo and others may encounter within the busy canyons of urban centers is the loss of Global Positioning System (GPS) satellite data. Robots require a complex suite of coordinating data systems that bounce between orbiting satellites to provide the positioning and communication links needed to accurately navigate our world. The only thing that is certain, as competing technologies and standards wrestle for adoption in this nascent marketplace, is the critical connection between Earth and space. Based upon the estimated growth of autonomous systems on the road, in the workplace and in the home over the next ten years, most unmanned systems will rely heavily on the ability of commercial space providers to fulfill their boastful mission plans to launch thousands of new satellites into an already crowded low Earth orbit.

As shown by the chart below, the entry of autonomous systems will drive an explosion of data communications between terrestrial machines and space, leading to tens of thousands of new rocket launches over the next two decades. A study by Northern Sky Research (NSR) projected that by 2023 there will be an estimated 5.8 million satellite machine-to-machine (M2M) and Internet of Things (IoT) connections among approximately 50 billion global Internet-connected devices. In order to meet this demand, satellite providers are racing to the launch pads and raising billions in capital, even before firing up the rockets. As an example, OneWeb, which has raised more than $1.5 billion from Softbank, Qualcomm and Airbus, plans to launch the first 10 satellites of its constellation in 2018, eventually growing to 650 over the next decade. OneWeb competes with SpaceX, Boeing, Inmarsat, Iridium, and others in deploying new satellites offering high-speed communication spectrum, such as Ku band (12 GHz – 18 GHz), K band (18 GHz – 27 GHz), Ka band (27 GHz – 40 GHz) and V band (40 GHz – 75 GHz). The opening of new higher-frequency spectrum is critical to support the explosion of increased data demands. Today there are more than 250 million cars on the road in the United States, and in the future these cars will connect to the Internet, transmitting 200 million lines of code or 50 billion pings of data to safely and reliably transport passengers to their destinations every day.

Satellites already provide millions of GPS coordinates for connected systems. However, GPS accuracy has been off by as much as 5 meters, which in a fully autonomous world could mean the difference between life and death. Chip manufacturer Broadcom aims to reduce the error margin to 30 centimeters. According to a press release this summer, Broadcom’s technology works better in concrete canyons like New York, which have plagued Uber drivers for years with wrong fare destinations. Using new L5 satellite signals, the chips can fix positions more quickly and with lower power consumption (see diagram). Manuel del Castillo of Broadcom explained, “Up to now there haven’t been enough L5 satellites in orbit.” Currently there are approximately 30 L5 satellites in orbit. However, del Castillo suggests that could be enough to begin shipping the new chip next year: “[Even in a city’s] narrow window of sky you can see six or seven, which is pretty good. So now is the right moment to launch.”

David Bruemmer, a leading roboticist and business leader in this space, explained to me this week that GPS is inherently deficient, even with L5 satellite data. In addition, current autonomous systems rely too heavily on vision systems like LIDAR and cameras, which can only see what is in front of them, not around the corner. In Bruemmer’s opinion, the only solution that provides the greatest amount of coverage is one that combines vision and GPS with point-to-point communications such as ultra-wideband and RF beacons. Bruemmer’s company, Adaptive Motion Group (AMG), is a leading innovator in this space. Ultimately, for AMG to work efficiently with unmanned systems, it requires a communication pipeline wide enough to transmit space signals within a network of terrestrial high-speed frequencies.

AMG is not the only company focused on utilizing a wide breadth of data points to accurately steer robotic systems. Sandy Lobenstein, Vice President of Toyota Connected Services, explains that the Japanese car maker has been working with the antenna satellite company Kymeta to expand the data connectivity bandwidth in preparation for Toyota’s autonomous future. “We just announced a consortium with companies such as Intel and a few others to find ways to use edge computing and create standards around managing data flow in and out of vehicles with the cellphone industries or the hardware industries. Working with a company like Kymeta helps us find ways to use their technology to handle larger amounts of data and make use of large amounts of bandwidth that is available through satellite,” said Lobenstein.

In a world of fully autonomous vehicles, the road of the next decade truly will become an information superhighway – with data streams flowing down from thousands of satellites to receiving towers littered across the horizon, bouncing between radio masts, antennas and cars (Vehicle to Vehicle [V2V] and Vehicle to Infrastructure [V2X] communications). Last week, Broadcom ratcheted up its autonomous vehicle ambitions by announcing the largest tech deal ever: a $103 billion offer to acquire Qualcomm. The acquisition would enable Broadcom to dominate both aspects of autonomous communications that rely heavily on satellite uplinks: GPS and vehicle communications. Broadcom CEO Hock Tan said, “This complementary transaction will position the combined company as a global communications leader with an impressive portfolio of technologies and products.” Days earlier, Tan attended a White House press conference with President Trump boasting of plans to move Broadcom’s corporate office back to the United States, a very timely move as federal regulators will have to approve the Broadcom/Qualcomm merger.

The merger news comes months after Intel acquired Israeli computer vision company Mobileye for $15 billion. In addition to Intel, Broadcom also competes with Nvidia, which is leading the charge to enable artificial intelligence on the road. Last month, Nvidia CEO Jensen Huang predicted that “It will take no more than 4 years to have fully autonomous cars on the road. How long it takes for the vast majority of cars on the road to become that, it really just depends.” Nvidia, which traditionally has been a computer graphics chip company, has invested heavily in developing AI chips for automated systems. Huang shared his vision: “There are many tasks in companies that can be automated… the productivity of society will go up.”

Industry consolidation represents the current state of the autonomous car race as chip makers jockey to own the next generation of wireless communications. Tomorrow’s 5G mobile networks promise a tenfold increase in data streams for phones, cars, drones, industrial robots and smart city infrastructure. Researchers estimate that the number of Internet-connected chips could grow from 12 million to 90 million by the end of this year, making connectivity as ubiquitous as gasoline for connected cars. Karl Ackerman, analyst at Cowen & Co., said it best: “[Broadcom] would basically own the majority of the high-end components in the smart phone market and they would have a very significant influence on 5G standards, which are paramount as you think about autonomous vehicles and connected factories.”

The topic of autonomous transportation and smart cities will be featured at the next RobotLabNYC event series on November 29th @ 6pm with New York Times best selling author Dan Burstein/Millennium Technology Value Partners and Rhonda Binda of Venture Smarter, formerly with the Obama Administration – RSVP today.

Battery safety and fire handling

Lithium battery safety is an important issue as there are more and more reports of fires and explosions. Fires have been reported in everything from cell phones to airplanes to robots.

If you don’t know why we need to discuss this, or even if you do know, watch this clip or click here.

I am not a fire expert. This post is based on things I have heard and some basic research. Contact your local fire department for advice specific to your situation. I had very little success contacting my local fire department about this; hopefully you will have more luck.

Preventing Problems

1. Use a proper charger for your battery type and voltage. This will help prevent overcharging. In many cases lithium-ion batteries catch fire when the chargers keep dumping charge into the batteries after the maximum voltage has been reached.

2. Use a battery management system (BMS) when building battery packs with multiple cells. A BMS will monitor the voltage of each cell and halt charging when any cell reaches the maximum voltage. Cheap BMSs will stop all charging when any cell reaches that maximum voltage. Fancier/better BMSs can individually charge each cell to help keep the battery pack balanced. A balanced pack is good since each cell will be at a similar voltage, giving optimal battery pack performance. The fancier BMSs can also often detect if a single cell is reading incorrectly. There have been cases of a BMS working properly but a single cell going bad in a way that confused the BMS and resulted in a fire/explosion.

3. Only charge batteries in designated areas. A designated area should be non-combustible. For example, cement, sand, cinder block and metal boxes are not uncommon to use for charging areas. For smaller cells you can purchase fire containment bags designed to hold the charging battery.

In addition the area where you charge the batteries should have good ventilation.

I have heard that on the Boeing Dreamliner, part of the fix for their batteries catching fire was to make sure that the metal enclosure the batteries sit in could withstand the heat of a battery fire, and that in the event of a fire the fumes would vent outside the aircraft and not into the cabin.

Dreamliner battery pack before and after fire. [SOURCE]

4. Avoid short-circuiting the batteries. This can cause thermal runaway, which will also cause a fire/explosion. When I say avoid short-circuiting the battery, you are probably thinking of just touching the positive and negative leads together. While that is one example, you need to think of other causes as well. For example, puncturing a cell (such as with a drill bit or a screwdriver) or compressing the cells can cause a short circuit with a resulting thermal runaway.

5. Don’t leave batteries unattended when charging. This will let people be available in case of a problem. However, as you saw in the video above, you might want to keep a distance from the battery in case there is a catastrophic event with flames shooting out from the battery pack.

6. Store batteries within the specs of the battery. Usually that means room temperature and out of direct sunlight (to avoid overheating).

7. Training of personnel for handling batteries, charging batteries, and what to do in the event of a fire. Having people trained in what to do can be important so that they stay safe. For example, without training people might not realize how bad the fumes are. Also make sure people know where the fire pull stations are and where the extinguishers are.

Handling Fires

1. There are two primary issues with a lithium fire: the fire itself and the gases released. This means that even if you think you can safely extinguish the fire, you need to keep the fumes in mind and possibly stay clear of the fire.

2a. Lithium batteries, which are usually small non-rechargeable cells (such as in a watch), in theory require a class D fire extinguisher. However, most people do not have one available. As such, for the most part you need to just let the battery burn itself out (it is good that these batteries are usually small). You can use a standard class ABC fire extinguisher to prevent the spread of the fire. Avoid using water on the lithium battery itself, since lithium and water can react violently.

2b. Lithium-ion batteries (including LiFePO4), which are used on many robots, are often larger and rechargeable. These batteries contain very little actual lithium metal, so you can use water or a class ABC fire extinguisher. Do not use a class D extinguisher with these batteries.

With both of these types of fires, there is a good chance that you will not be able to extinguish the fire. If you can safely be in the area, your primary goal is to allow the battery to burn in a controlled and safe manner. If possible, try to get the battery outside and onto a surface that is not combustible. As a reminder, lithium-ion fires are very hot and flames can shoot out from various places unexpectedly; you need to be careful and only do what you can do safely. If you have a battery with multiple cells, it is not uncommon for each cell to catch fire separately; you might see the flames die down, then shortly after another cell catches fire, and then another, as the cells cascade.

A quick reminder about how to use a fire extinguisher. Remember first you Pull the pin, then you Aim at the base of the fire, then you Squeeze the handle, followed by Sweeping back and forth at the base of the fire. [SOURCE]

3. In many cases the batteries are in an enclosure where if you spray the robots with an extinguisher you will not even reach the batteries. In this case your priority is your safety (from fire and fumes), followed by preventing the fire from spreading. To prevent the fire from spreading you need to make sure all combustible material is away from the robot. If possible get the battery pack outside.

In firefighting school a common question is: Who is the most important person? To which the response is, me!

4. If charging the battery, try to unplug the battery charger from the wall. Again, only do this if you can do so safely.


I hope you found the above useful. I am not an expert on lithium batteries or fire safety. Consult with your local experts and fire authorities. I am writing this post due to the lack of information in the robotics community about battery safety.

As Wired said, “you know what they say: With great power comes great responsibility.”


Thank you Jeff (I think he said I should call him Merlin) for some help with this topic.

Robocar/LIDAR news and video of the Apple car

Robocar news is fast and furious these days. I certainly don’t cover it all, but will point to stories that have some significance. Plus, to tease you, here’s a clip from my 4K video of the new Apple car that you’ll find at the end of this post.

Lidar acquisitions

There are many startups in the Lidar space. Recently, Ford’s Argo division purchased Princeton Lightwave, a small LIDAR firm which was developing 1.5 micron lidars. 1.5 micron lidars include Waymo’s own proprietary unit (subject of the lawsuit with Uber) as well as those from Luminar and a few others. Most other lidar units work in the 900nm band of near infrared.

Near infrared lasers and optics can be based on silicon, and silicon can be cheap because there is so much experience and capacity devoted to making it. 1.5 micron light is not picked up by silicon, but it’s also not focused by the lens of the eye. That means that you can send out a lot more power and still not hurt the eye, but your detectors are harder to make. That extra power lets you see to 300m, while 900nm band lidars have trouble with black objects beyond 100m.

100m is enough for urban driving, but is not a comfortable range for higher speeds. Radar senses far but has low resolution. Thus the desire for 1.5 micron units.

GM/Cruise also bought Strobe, a small lidar firm with a very different technology. Their technology is in the 900nm band, but they are working on ways to steer the beam without moving mirrors the way Velodyne and others do. (Quanergy, in which I have stock, also is developing solid state units, as are several others.) They have not published but there is speculation on how Strobe’s unit works.

What’s interesting is that these players have decided, like Waymo, Uber and others, that they should own their own lidar technology, rather than just buy it from suppliers. This means one of two things:

  • They don’t think anybody out there can supply them with the LIDAR they need — which is what motivated Waymo to build their own, or
  • They think their in-house unit will offer them a competitive advantage

On the surface, neither of these should be true. Suppliers are all working on making lidars because most people think they will be needed. And folks are working on both 900nm and 1.5 micron units, eager to sell. It’s less clear if any of these units will be significantly better than the ones the independent suppliers are building. That’s what is needed to get a competitive edge. The unit needs to be longer range, better resolution, better field of view or more reliable than supplier units. It’s not clear why that will be, but nobody has released solid specs.

What shouldn’t matter is that they can make it cheaper in-house, especially for those working on taxi service. First of all, it’s very rare you can get something cheaper by buying the entire company. Secondly, it’s not important to make it much cheaper for the first few years of production. Nobody is going to win or lose based on whether their taxi unit costs a few thousand more dollars to make.

So there must be something else that is not revealed driving these acquisitions.

Velodyne, which pioneered the lidar industry for self-driving cars, just announced their new 128-line lidar with twice the planes and 4x the resolution of the giant “KFC Bucket” unit found on most early self-driving car prototypes.

The $75,000 64-laser Velodyne kick-started the industry, but it’s big and expensive. This new one will surely also be expensive but is smaller. In a world where many are working with the 16 and 32 laser units, the main purpose of this unit, I think, will be for those who want to develop with the sensor of the future.

Doing your R&D with high-end gear is often a wise choice. In a few years, the high resolution gear will be cheaper and ready for production, and you want to be ready for that. At the same time, it’s not yet clear how much 128 lines gains over 64. It’s not easy to identify objects in lidar, but you don’t absolutely have to, so most people have not worried too much about it.

Pioneer, the Japanese electronics maker, has also developed a new lidar. Instead of trying to steer a laser entirely with solid state techniques, theirs uses MEMS mirrors, similar to those in DLP projectors. This is effectively solid state even though the mirrors actually move. I’ve seen many lidar prototypes that use such mirrors but for some reason they have not gone into production. It is a reasonably mature technology, and can be quite low cost.

More acquisitions and investment

Delphi recently bought Nutonomy, the Singapore/MIT based self-driving car startup. I remember visiting them a few times in Singapore and finding them to be not very far along compared to others. Far enough along to fetch $400M. Delphi is generally one of the better-thinking tier one automotive suppliers and now it can go full-stack with this purchase.

Of course, since most automakers have their own full stack efforts underway, just how many customers will the full-stack tier one suppliers sell to? They may also be betting that some automakers will fail in their projects, and need to come to Delphi, Bosch or others for rescue.

Another big investment is Baidu’s “Project Apollo.” This “moonshot” is going to invest around $1.5B in self-driving ventures, and support it with open source tools. They have lots of partners, so it’s something to watch.

Other players push forward

Navya was the first company to sell a self-driving car for consumer use. Now their new vehicle is out. In addition, yesterday in Las Vegas, they started a pilot and within 2 hours had a collision. Talk about bad luck — Navya has been running vehicles for years now without such problems. It was a truck that backed into the Navya vehicle, and it was the truck driver’s fault, but some are faulting the shuttle because all it did was stop dead when it saw the truck coming. It did not back out of the way, though it could have. Nobody was hurt.

Aurora, the startup created by Chris Urmson after he left Waymo, has shown off its test vehicles. No surprise, they look very much like the designs of Waymo’s early vehicles, a roof rack with a Velodyne 64 laser unit on top. The team at Aurora is top notch, so expect more.

Apple’s cars are finally out and about. Back in September I passed one and took a video of it.

You can see it’s loaded with sensors. No fewer than 12 of the Velodyne 16 laser pucks and many more to boot. Apple is surely following that philosophy of designing for future hardware.

Robots’ two-pronged role in Alibaba’s $25.3 billion Singles’ Day sale

The Singles’ Day Shopping Festival held each year on November 11th is just like Black Friday, Mothers’ Day or any other sales-oriented pseudo-holiday, but bigger and more extravagant. Starting in 2009 in China as a university campus event, Singles Day has now spread across China and to more than 180 countries.

After 24 hours of non-stop online marketing, including a star-studded Gala with film star Nicole Kidman and American rapper Pharrell Williams, the day (also known as Bachelors Day or 11/11 because the number “1” is symbolic of an individual that is alone) concluded with a sales total of ¥168 billion ($25.3 billion) on the Tmall and Taobao e-commerce networks (both belong to the Alibaba Group (NASDAQ:BABA)). Other e-commerce platforms including Amazon’s Chinese site Amazon.cn, JD.com, VIP.com and Netease’s shopping site you.163.com also participated in the 11/11 holiday with additional sales.

  • Singles Day sales reported by Alibaba were $5.8 billion in 2013, $9.3 billion in 2014, $14.3 billion in 2015, $17.8 billion in 2016, and $25.3 billion for 2017.
  • In a story reported by DealStreetAsia, JD.com said that their sales for Singles’ Day – and its 10-day run-up – reached ¥127.1 billion ($19.1 billion), a 50% jump from a year ago. JD started its sales event on Nov. 1st to reduce delivery bottlenecks and to give users more time to make their purchasing decisions.

Muyuan Li, a researcher for The Robot Report, said: “Chinese people love shopping on e-commerce websites because sellers offer merchandise 20% – 60% cheaper than in the stores, particularly on 11/11. Sites and consumer items are marketed as a game and people love to play. For example, if you deposit or purchase coupons in advance, you can get a better deal. Customers compare prices on manmanbuy.com or smzdm.com and paste product page urls into xitie.com to see the historical prices for those products. There are lotteries to win Red Envelope “cash” which are really credits that can be applied to your Singles Day shopping carts, and contests to beat other shoppers to the check out.”

Robotics-related products sold in great quantities on Singles Day. ECOVACS and other brands of robot vacuum cleaners were big sellers as were DJI and other camera drones and all sorts of robotic toys and home assistants.

The process:

Although 11/11 was a great day for people buying robotic products, it was also a significant day for the new technologies of handling those products: 1.5 billion Alibaba parcels will traverse China over the next week delivering those purchases to Chinese consumers, while all those packed and shipped items will have been manufactured, boxed, cased, temporarily stored and then unskidded, unboxed, picked and packed, sorted for shipment and shipped in all manner of ways.

New technology is part of how this phenomenal day is possible: robots, automation, vision systems, navigation systems, transportation systems and 100,000 upgraded smart stores (where people viewed items but then bought online) – all were part of the mechanical underside of this day – and foretell how this is going to play forward. There were also hundreds of thousands of human workers involved in the process.

Material handling:

Here are a few of the robotics-related Chinese warehousing systems vendors that are helping move the massive volume of 11/11’s 1.5 billion packages:

Alibaba and Jack Ma:

Jack Ma is the founder and executive chairman of Alibaba Group and one of Asia’s richest men, with a net worth of $47.8 billion as of November 2017, according to Forbes. His is a rags-to-riches story with an ‘Aha’ moment: on a trip to the US, he tried to search for general information about China and found none. So he and his friend created a website with a rudimentary linkage system to other websites. They named their company “China Yellow Pages.”

Quoting from Wikipedia: “In 1999 he founded Alibaba, a China-based business-to-business marketplace site in his apartment with a group of 18 friends. In October 1999 and January 2000, Alibaba twice won a total of a $25 million foreign venture capital investment. Ma wanted to improve the global e-commerce system and from 2003 he founded Taobao Marketplace, Alipay, Ali Mama and Lynx. After the rapid rise of Taobao, eBay offered to purchase the company. Ma rejected their offer, instead garnering support from Yahoo co-founder Jerry Yang with a $1 billion investment. In September 2014 Alibaba became one of the most valuable tech companies in the world after raising $25 billion, the largest initial public offering in US financial history. Ma now serves as executive chairman of Alibaba Group, which is a holding company with nine major subsidiaries: Alibaba.com, Taobao Marketplace, Tmall, eTao, Alibaba Cloud Computing, Juhuasuan, 1688.com, AliExpress.com and Alipay.”

Ma was recently quoted at the Bloomberg Global Business Forum as saying that people should stop looking to manufacturing to drive economic growth. That message ties into his and Alibaba’s overall business plan to be involved in all aspects of the online e-commerce world.

Robohub Podcast #247: ANYmal: A Ruggedized Quadrupedal Robot, with Marco Hutter



In this episode, Audrow Nash interviews Marco Hutter, Assistant Professor for Robotic Systems at ETH Zürich, about ANYmal, a quadrupedal robot designed for autonomous operation in challenging environments. Hutter discusses ANYmal’s design, the ARGOS oil and gas rig inspection challenge, and the advantages and complexities of quadrupedal locomotion.

Here is a video showing some of the highlights of ANYmal at the ARGOS Challenge.

 

Here is a video that shows some of the motions ANYmal is capable of.

 

 

Marco Hutter

Marco Hutter has been an assistant professor for Robotic Systems at ETH Zürich since 2015 and a Branco Weiss Fellow since 2014. Before this, he was deputy director and group leader in the field of legged robotics at the Autonomous Systems Lab at ETH Zürich. After studying mechanical engineering, he completed his doctoral degree in robotics at ETH with a focus on the design, actuation, and control of dynamic legged robotic systems. Besides his commitment to the National Centre of Competence in Research (NCCR) Digital Fabrication since October 2015, Hutter is part of NCCR Robotics and coordinator of several research projects, industrial collaborations, and international competitions (e.g. the ARGOS Challenge) that target the application of highly mobile autonomous vehicles in challenging environments such as search and rescue, industrial inspection, or construction. His research interests lie in the development of novel machines and actuation concepts together with the underlying control, planning, and optimization algorithms for locomotion and manipulation.

 

Links

Waymo deploys with no human safety driver oversight

Credit: Waymo

In a major milestone for robocars, Waymo has announced they will deploy in Phoenix with no human safety drivers behind the wheel. Until now, almost all robocars out there have only gone out on public streets with a trained human driver behind the wheel, ready to take over at any sign of trouble. Waymo and a few others have done short demonstrations with no safety driver, but now an actual pilot, providing service to beta-testing members of the public, will operate without human supervision.

https://www.youtube.com/watch?v=aaOB-ErYq6Y

This is a big deal, and indicates Waymo’s internal testing is showing a very strong safety record. The last time they published numbers, they had gone 83,000 miles between “required interventions.” In safety driver training, we are told to intervene at any sign of a problem; these interventions are then tested in simulation to find out what would have happened if there had been no intervention. If the car would have done the right thing, it’s not a required intervention.

Waymo must have built their number up a great deal from there. People have an accident that is reported to insurance about every 250,000 miles, and to police every 500,000 miles. Injury accidents happen every 1.2M miles, and fatalities every 80M miles. In Waymo’s testing, where they got hit a lot by other drivers, they discovered that there are “dings” about every 100,000 miles that don’t get reported to police or insurance.

People have argued about how good you have to be to put a robocar on the road. You need to be better than all those numbers. I will guess that Waymo has gotten the “ding” number up above 500,000 miles — which is close to a full human lifetime of driving. Since they have only driven 3.5M miles they can’t make real-world estimates of the frequency of injuries and certainly not of fatalities, but they can make predictions. And their numbers have convinced them, and the Alphabet management, that it’s time to deploy.

Congratulations to all the team.

They did this not just with real world testing, but building a sophisticated simulator to test zillions of different situations, and a real world test track where they could test 20,000 different scenarios. And for this pilot they are putting it out on the calm and easy streets of Phoenix, probably one of the easiest places to drive in the world. Together, that gives the confidence to put “civilians” in the cars with no human to catch an error. Nothing will be perfect, but this vehicle should outperform a human driver. The open question will be how the courts treat that when the first problem actually does happen. Their test record suggests that may be a while; let us hope it is.

Where do we go from here?
This pilot should give pause to those who have said that robocars are a decade or more away, but it also doesn’t mean they are full here today. Phoenix was chosen because it’s a much easier target than some places. Nice, wide streets in a regular grid. Flat terrain. Long blocks. Easy weather with no snow and little rain. Lower numbers of pedestrians and cyclists. Driving there does not let you drive the next day in Boston.

But neither does it mean it takes you decades to go from Phoenix to Boston, or even to Delhi. As Waymo proves things out in this pilot, first they will prove the safety and other technical issues. Then they will start proving out business models. Once they do that, prepare for a land rush as they leap to other cities to stake the first claim and the first-mover advantage (if there is one, of course.) And expect others to do the same, but later than Waymo, because as this demonstrates, Waymo is seriously far ahead of the other players. It took Waymo 8 years to get to this, with lots of money and probably the best team out there. But it’s always faster to do something the 2nd time. Soon another pilot from another company will arise, and when it proves itself, the land rush will really begin.

Smart soft robotics for stroke rehabilitation

The culmination of work by lead researcher Alistair C. McConnell through his PhD and by the SOPHIA team, the Soft Orthotic Physiotherapy Hand Interactive Aid (SOPHIA) forms the foundation for our future research into soft robotic rehabilitation systems.

Through Alistair’s research, it became apparent that there was a lack of stroke rehabilitation systems for the hand that could be used in a domestic environment and monitor both physical and neural progress. Alistair conducted a thorough review of the literature to fully explore the state of the art and the apparent lack of this type of rehabilitation system. This review investigated the development of both exoskeleton and end-effector based systems to examine how this point was reached and what gaps and issues remain.
From this review and discussions with physiotherapists, we developed an idea for a brain-machine-controlled soft robotic system. The “Soft Orthotic Physiotherapy Hand Interactive Aid” (SOPHIA) needed to provide rehabilitation aid in two forms, passive and active:
• Passive rehabilitation, where the subject performs their exercises, which are reflected in a 3D representation on a screen, and all the data is stored for analysis.
• Active rehabilitation, where the subject attempts to open their hand and, if full extension is not achieved within a designated time, the system provides the extra force needed (a simple sketch of this trigger logic follows below).
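A rough, hypothetical sketch of the active-rehabilitation trigger (the sensor and actuator functions are placeholders, not the actual SOPHIA control code):

function activeRehabStep()
% Hypothetical sketch: assist hand extension only if the subject cannot complete it in time
targetExtension = 0.9;    % normalized finger extension considered "full" (assumed threshold)
timeLimit = 5.0;          % seconds allowed for the unassisted attempt (assumed)

t0 = tic;
assisted = true;
while toc(t0) < timeLimit
    if readFingerExtension() >= targetExtension   % placeholder sensor read
        assisted = false;                         % subject reached full extension unaided
        break;
    end
    pause(0.05);
end

if assisted
    inflateActuators();   % placeholder: electropneumatic unit provides the extra force
end
end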

Through a grant from the Newton Fund we developed the SOPHIA system, which consists of a soft robotic exoskeleton with a set of pneu-net actuators providing the force to fully extend the fingers of a hand, and an electropneumatic control system containing the required diaphragm pumps, valves and sensors in a compact modular unit.
The inclusion of a Brain Machine Interface (BMI) allowed us to use motor imagery techniques, where the electroencephalogram signal from the subject could be used as a trigger for the extension motion of the hand, augmenting the active rehabilitation.
We designed the system to accept input from two different BMI devices, and compared a wired, high-end BMI with a low-cost, wireless BMI. By applying machine-learning approaches we were able to narrow down the differences between these two input systems, and our approach enabled the inexpensive system to perform at the same level as the high-end system.

You can find further information on the SOPHIA system and the current state of the art in robotic devices and brain-machine interfaces for hand rehabilitation in our recent journal publications.

SOPHIA: Soft Orthotic Physiotherapy Hand Interactive Aid: 
https://www.frontiersin.org/articles/10.3389/fmech.2017.00003/full

Robotic devices and brain-machine interfaces for hand rehabilitation post-stroke: 
https://www.ncbi.nlm.nih.gov/pubmed/28597018

How will robots and AI change our way of life in 2030?

Sydney Padua’s Ada Lovelace is a continual inspiration.

At #WebSummit 2017, I was part of a panel on what the future will bring in 2030 with John Vickers from Blue Abyss, Jacques Van den Broek from Randstad and Stewart Rogers from Venture Beat. John talked about how technology will allow humans to explore amazing new places. Jacques demonstrated how humans were more complex than our most sophisticated AI and thus would be an integral part of any advances. And I focused on how the current technological changes would look amplified over a 10–12 year period.

After all, 2030 isn’t that far off, so we have already invented all the tech, but it isn’t widespread yet and we’re only guessing what changes will come about with the network effects. As William Gibson said, “The future is here, it’s just not evenly distributed yet.”

What worries me is that right now we’re worried about robots taking jobs. And yet the jobs at most risk are the ones in which humans are treated most like machines. So I say, bring on the robots! But what also worries me is that the current trend towards a gig economy and micro transactions powered by AI, ubiquitous connectivity and soon blockchain, will mean that we turn individuals back into machines. Just part of a giant economic network, working in fragments of gigs not on projects or jobs. I think that this inherent ‘replaceability’ is ultimately inhumane.

When people say they want jobs, they really mean they want a living wage and a rewarding occupation. So let’s give the robots the gigs.

Here’s the talk: “Life in 2030”
It’s morning, the house gently blends real light tones and a selection of bird song to wake me up. Then my retro ‘Teasmade’ serves tea and the wall changes from sunrise to news channels and my calendar for today. I ask the house to see if my daughter’s awake and moving. And to remind her that the clothes only clean themselves if they’re in the cupboard, not on the floor.

Affordable ‘Pick up’ bots are still no good at picking up clothing although they’re good at toys. In the kitchen I spend a while recalibrating the house farm. I’m enough of a geek to put the time into growing legumes and broccoli. It’s pretty automatic to grow leafy greens and berries, but larger fruits and veg are tricky. And only total hippies spend the time on home grown vat meat or meat substitutes.

I’m proud of how energy neutral our lifestyle is, although humans always seem to need more electricity than we can produce. We still have our own car, which shuttles my daughter to school in remote operated semi autonomous mode where control is distributed between the car, the road network and a dedicated 5 star operator. Statistically it’s the safest form of transport, and she has the comfort of traveling in her own family vehicle.

Whereas I travel in efficiency mode — getting whatever vehicle is nearby heading to my destination. I usually pick the quiet setting. I don’t mind sharing my ride with other people or drivers but I like to work or think as I travel.

I work in a creative collective — we provide services and we built the collective around shared interests like historical punk rock and farming. Branding our business or building our network isn’t as important as it used to be because our business algorithms adjust our marketing strategies and bid on potential jobs faster than we could.

The collective allows us to have better health and social plans than the usual gig economy. Some services, like healthcare or manufacturing still have to have a lot of infrastructure, but most information services can cowork or remote work and our biggest business expense is data subscriptions.

This is the utopian future. For the poor, it doesn’t look as good. Rewind…
It’s morning. I’m on Basic Income, so to get my morning data & calendar I have to listen to 5 ads and submit 5 feedbacks. Everyone in our family has to do some, but I do extra so that I get parental supervision privileges and can veto some of the kid’s surveys.

We can’t afford to modify the house to generate electricity, so we can’t afford decent house farms. I try to grow things the old way, in dirt, but we don’t have automation and if I’m busy we lose produce through lack of water or bugs or something. Everyone can afford Soylent though. And if I’ve got some cash we can splurge on junk food, like burgers or pizza.

My youngest still goes to a community school meetup but the older kids homeschool themselves on the public school system. It’s supposed to be a personalized AI for them but we still have to select which traditional value package we subscribed to.

I’m already running late for work. I see that I have a real assortment of jobs in my queue. At least I’ll be getting out of the house driving people around for a while, but I’ve got to finish more product feedbacks while I drive and be on call for remote customer support. Plus I need to do all the paperwork for my DNA to be used on another trial or maybe a commercial product. Still, that’s how you get health care — you contribute your cells to the health system.

We also go bug catching, where you scrape little pieces of lichen, or dog poo, or insects into the samplers, anything that you think might be new to the databases. One of my friends hit jackpot last year when their sample was licensed as a super new psychoactive and she got residuals.

I can’t afford to go online shopping so I’ll have to go to a mall this weekend. Physical shopping is so exhausting. There are holo ads and robots everywhere spamming you for feedback and getting in your face. You might have some privacy at home but in public, everyone can eye track you, emote you and push ads. It’s on every screen and following you with friendly robots.

It’s tiring having to participate all the time. Plus you have to take selfies and foodies and feedback and survey and share and emote. It used to be ok doing it with a group of friends but now that I have kids ….
Robots and AI make many things better although we don’t always notice it much. But they also make it easier to optimize us and turn us into data, not people.

Robust distributed decision-making in robot swarms

Credit: Jerry Wright

Reaching an optimal shared decision in a distributed way is a key aspect of many multi-agent and swarm robotic applications. As humans, we often have to come to some conclusions about the current state of the world so that we can make informed decisions and then act in a way that will achieve some desired state of the world. Of course, expecting every person to have perfect, up-to-date knowledge about the current state of the world is unrealistic, and so we often rely on the beliefs and experiences of others to inform our own beliefs.

We see this too in nature, where honey bees must choose between a large number of potential nesting sites in order to select the best one. When a current hive grows too large, the majority of bees must choose a new site to relocate to via a process called “swarming” – a problem that can be generalised to choosing the best of a given number of choices. To do this, bees rely on a combination of their own experiences and the experiences of others in the hive in order to reach an agreement about which is the best site. We can learn from solutions found in nature to develop our own models and apply these to swarms of robots. By having pairs of robots interact and reach agreements at an individual level, we can distribute the decision-making process across the entire swarm.

Decentralised algorithms such as these are often considered more robust than their centralised counterparts because there is no single point of failure, but this claim is rarely put to the test. Robustness is crucial in large robot swarms, given that individual robots are often built from cheap, and therefore less reliable, hardware to keep costs down. Robustness is also important in scenarios critical to the protection or preservation of life, such as search and rescue operations. In this context, we introduce an alternative model for distributed decision-making in large robot swarms, examine its robustness to the presence of malfunctioning robots, and compare it to an existing model: the weighted voter model (Valentini et al., 2014).

Kilobots are small, low-cost robots used to study swarm robotics. Each contains two motors for movement, plus an RGB LED and an IR transmitter for communication.

In this work we consider a simplified version of the weighted voter model in which robots (specifically Kilobots) move around randomly in a 1.2 m² arena and, at any point in time, are in one of two primary states: signalling or updating. Those in the signalling state broadcast their current belief, either “choice A is the best” or “choice B is the best”, and they continue to do so for a length of time proportional to the quality of that choice. In our experiments, choice A has a quality of 9 and choice B a quality of 7, so robots believing that A is the best signal for a longer duration than those believing that B is the best. This naturally creates a bias in the swarm: those signalling for choice A do so for longer than those signalling for choice B, which inevitably affects the updating robots. Those in the updating state select a signalling robot at random from their local neighbours, provided it is within their communication radius (a 10 cm limit for the Kilobots), and adopt that robot’s belief.
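To make the update mechanics concrete, here is a minimal simulation sketch in Python. The class and parameter names are our own illustrative choices, not code from the paper: each robot signals its current choice for a time proportional to that choice’s quality, and an updating robot simply copies the belief of a randomly chosen signalling neighbour.

    import random

    QUALITY = {"A": 9, "B": 7}  # option qualities used in the experiments described above

    class Robot:
        def __init__(self, belief):
            self.belief = belief          # "A" or "B"
            self.signalling_time = 0.0    # remaining time in the signalling state

        def start_signalling(self, time_per_quality=1.0):
            # Signal for a duration proportional to the quality of the believed-best
            # option, so higher-quality options are advertised for longer.
            self.signalling_time = QUALITY[self.belief] * time_per_quality

        def voter_update(self, signalling_neighbours):
            # Weighted voter model: adopt the belief of one signalling neighbour
            # chosen uniformly at random from those within communication range.
            if signalling_neighbours:
                self.belief = random.choice(signalling_neighbours).belief

Because robots signalling for the higher-quality option signal for longer, an updating robot is more likely to sample a neighbour advertising that option, which is where the bias towards the better choice comes from.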

We compare this model to our “three-valued model”. Instead of immediately adopting the signalling robot’s belief, an updating robot follows this rule: given that a belief in choice A corresponds to a truth state of 1 and choice B to a truth state of 0, we introduce a third truth state of 1/2 representing “undecided” or “unknown” as an intermediate state. If the two robots’ beliefs conflict, such that one believes choice A (1) to be the best and the other choice B (0), then the updating robot adopts the intermediate belief state of 1/2. If one robot holds a decided belief, either in choice A (1) or choice B (0), and the other is undecided (1/2), then the decided belief is preserved. This approach eventually leads the swarm to reach consensus about which choice is best, and the swarm chooses the better of the two options, which is A in this case.
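The three-valued rule itself reduces to a small function over the two truth states. The sketch below is our own illustration of that rule, with hypothetical names, assuming 1 encodes choice A, 0 encodes choice B and 1/2 encodes “undecided”:

    UNDECIDED = 0.5

    def three_valued_update(own_state, neighbour_state):
        """Return the updating robot's new truth state after pairing with a signalling robot."""
        if own_state == neighbour_state:
            return own_state                  # agreement: keep the shared belief
        if UNDECIDED in (own_state, neighbour_state):
            # One robot is undecided, so the decided belief is preserved.
            return neighbour_state if own_state == UNDECIDED else own_state
        return UNDECIDED                      # direct conflict (1 vs 0): become undecided

For example, three_valued_update(1, 0) returns 0.5 (a direct conflict leaves the robot undecided), while three_valued_update(0.5, 1) returns 1 (the decided neighbour’s belief wins).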

We then adapt our model so that a percentage of robots malfunction: instead of updating their belief based on other robots, they adopt a random belief state (either 1 or 0) and then continue to signal for that random choice. We run experiments both in simulation and on a physical swarm of Kilobots.
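Under the illustrative assumptions above, injecting this kind of fault amounts to replacing the update step with a coin flip, roughly as follows (again our own sketch, not the experiment code):

    import random

    def malfunctioning_update(own_state, neighbour_state):
        # A faulty robot ignores its neighbour entirely and adopts a random
        # decided belief (1 or 0), which it then goes on to signal for.
        return random.choice([0, 1])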

Results

In the figure, we show results as a trajectory of the Kilobots signalling for either choice A or choice B.

Experiments involve a population of 400 Kilobots where, on average, 10% of the swarm is malfunctioning (signalling for a random choice). The three-valued model eventually brings 100% of the (functional) Kilobots in the swarm to signal for choice A in just over 4 minutes. This model outperforms the weighted voter model which, while quicker to come to a decision, achieves below 90% on average. The inclusion of our “undecided” state slows convergence, but in doing so provides a means for robots to avoid adopting the belief of malfunctioning robots when they are in disagreement. For robotic systems where malfunction is a possibility, the three-valued model therefore seems preferable.

In the video, we show a time-lapse of experiments performed on 400 Kilobots where blue lights represent those signalling for choice A, and red those for choice B. Those in the intermediate state (1/2) may be coloured either red or blue. The green Kilobots are performing as if malfunctioning, such that they adopt a random belief and then signal for that belief.

In the future, we would like to consider ways to close the gap between the three-valued model and the weighted voter model in terms of decision-making speed while maintaining improved performance in the presence of malfunctioning robots. We also intend to consider different distributed algorithms for decision-making which take account of numerous beliefs being signalled within a robot’s radius of communication. So far, both models considered in this work only consider a single robot’s belief while updating, but there exist other models, such as majority rule models and models for opinion-pooling, which take account of the beliefs of many robots. Finally, we intend to investigate models that scale well with the number of choices that the swarm must choose between. Currently, most models only consider a small number of choices, but the models discussed here require discrete signalling periods which would need to increase as the number of choices increases.


This article was originally posted on EngMaths.org.

For more information, read Crosscombe, M., Lawry, J., Hauert, S., & Homer, M. (2017). Robust Distributed Decision-Making in Robot Swarms: Exploiting a Third Truth State. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017)

1000 local events expected during the European Robotics Week 2017

The importance of robotics for Europe’s regions will be the focus of a week-long celebration of robotics taking place around Europe on 17–27 November 2017. The European Robotics Week 2017 (ERW2017) is expected to include more than 1000 local events for the public — open days by factories and research laboratories, school visits by robots, talks by experts and robot competitions are just some of the events.

Robotics is increasingly important in education. “Since 2011, we have been asking schools throughout all regions of Europe to demonstrate robotics education at all levels,” says Reinhard Lafrenz, the Secretary General of euRobotics, the association for robotics researchers and industry which organises ERW2017. “I am delighted that many skilled teachers and enthusiastic local organisers have taken up this challenge and we have seen huge success in participation, with over 1000 events expected to be organised in all regions of Europe this year.”

All over Europe, ERW2017 will show the public how robots can support our daily lives, for example, by helping during surgery and, in the future, by providing support and care for people with disabilities, or how robots can monitor the environment. Robotics is also an essential part of EU-funded digital innovation hubs and could, in the future, contribute to the creation of new jobs.

Some of the highlights of the ERW events announced so far are:

  • in Italy, the School of Robotics will webstream an event at the KUKA robot company;
  • in Bosnia-Herzegovina, there will be dozens of SPARKreactors League robotics competitions;
  • in Latvia and Iceland, there will be ERW events for the first time;
  • in Spain, over 200 events are being organised in schools, more than half of them in Catalonia;
  • in Germany, nearly 40 events will include First Lego League competitions for young people, and an education day at the Fraunhofer IPA research organisation and a humanoid robot workshop at Hamburg University of Technology.

The ERW2017 Central Event, organised in Brussels, will see the “Robots Discovery” exhibition hosted by the European Committee of the Regions (20-23 November), where robotics experts from 30 European and regionally funded projects will outline how their work can impact our society. The exhibiting projects will show robots in healthcare helping during surgery or providing support for elder care, helping students develop digital skills, monitoring the environment and applying agricultural chemicals with precision and less waste, or helping save lives after disasters.

Other events organised in Belgium include robotics classes for children at the Eurospace Center (24 November) and a demonstration in Brussels of the self-driving bus from Finland’s Metropolia University of Applied Sciences (22-23 November). ERW2017 will overlap with the last week of the month-long InQbet hackathon on innovation in robotics and artificial intelligence.

euRobotics has recorded 400 000 visitors across Europe to events at the six previous ERWs.

Find your local ERW activities here and follow #ERW2017 on Twitter.

Funding trends: self-driving dreams coming true


Participants and startups in the emerging self-driving vehicles industry (components, systems, trucks, cars and buses) have been at it for almost 60 years. The pace accelerated in 2004, 2005 and 2007, when DARPA sponsored long-distance competitions for driverless cars, and then again in 2009 when Uber began its ride-hailing business.

As the prospect grew that self-driving ride-hailing fleets, vehicles, systems and associated AI would soon be a reality, startups, fundings, mergers and acquisitions followed, reaching a peak in 2017. Thus far in 2017, more than 55 companies and startups offering everything from solid-state distance sensors to ride-share fleets and mapping systems – plus five strategic acquisitions – raised over $28.2 billion!

2017 Investment Trends: Self-driving

Listed below are month-by-month recaps of self-driving-related fundings and acquisitions as reported by The Robot Report. The two massive fundings by the SoftBank Vision Fund in May, the Intel acquisition of Mobileye in March and Ford’s acquisition of Argo AI in February are extraordinary. Nevertheless, pulling out those billion-dollar transactions still shows that $2.4 billion found its way to more than 55 companies. [The trend continues in November with Optimus Ride and Ceres Imaging both raising Series A money.]

Click on the month for funding details, links and profiles for each of the companies.

  • October – $957.24 million:
    • Mapbox-$164M, Element AI-$105M, Horizon Robotics-$100M, Innoviz Technologies-$8M, Momenta AI-$46M, Built Robotics-$15M, Blickfeld-$4.25M, nuTonomy was acquired by Delphi Automotive-$450M, and Strobe was acquired by General Motors-unknown amount.
  • September – $275 million:
    • LeddarTech-$101M, Innoviz Technologies-$65M, JingChi-$52M, Five AI-$35M, Drive AI-$15M, Ushr Inc-$10M and Metawave-$7M.
  • August – $70 million:
    • Oryx Vision-$50M and TuSimple-$20M.
  • July – $413 million:
    • Nauto-$159M, Brain Corp-$114M, Momenta AI-$46M, Autotalk-$40M, Slamtec-$22M, Embark-$15M, Xometry-$15M and Metamoto-$2M.
  • June – $112.5 million:
    • Drive AI-$50M, Swift Navigation-$34M, AEye-$16M, Carmera-$6.4M, Cognata-$5M and Optimus Ride-$1.1M.
  • May – $9.676 billion:
    • Didi Chuxing-$5.5 billion, Nvidia-$4 billion, ClearMotion-$100M, Echodyne-$29M, DeepMap-$15M, Hesai Photonics Technology-$16M, TriLumina-$9M, AIRY 3D-$3.5M and Vivacity Labs-$3.3M.
  • April – $306.6 million:
    • Mobvoi-$180M, Peloton Technology-$60M, Luminar Technology-$36M, Renovo Auto-$10M, Aurora Innovation-$6.1M, VIST Group-$6M, DeepScale-$3M, Arbe Robotics-$2.5M, BestMile-$2M, Compound Eye-$1M.
  • March – $15.343 billion:
    • Wayray-$18M, EasyMile-$15M, SB Drive-$4.6M, Starsky Robotics-$3.75M and CrowdAI-$2M. Intel acquired Mobileye for $15.3 billion.
  • February – $1.024 billion:
    • ZongMu Technology-$14.5M and TetraVue-$10M. Ford Motor Co acquired Argo AI-$1 billion.
  • January – $??? million: 
    • Autonomos was acquired by TomTom-unknown amount.

The SoftBank Vision Fund Effect

Plentiful money and sky-high valuations are causing more companies to delay IPOs. The SoftBank Vision Fund is a key enabler of this recent phenomenon. Founded in 2017 with a goal of $100 billion (it closed with $93 billion) and with principal investors including SoftBank, Saudi Arabia’s sovereign wealth fund, Abu Dhabi’s national wealth fund, Apple, Foxconn, Qualcomm and Sharp, the Fund has been disbursing at a rapid pace. According to Recode, the Fund had, through August, invested over $30 billion in Uber, ARM, Nvidia, WeWork, OneWeb, Flipkart, OSIsoft, Roivant, SoFi, Fanatics, Improbable, OYO, Slack, Plenty, Nauto and Brain Corp. Many on that list are involved in the self-driving industry.

The NY Times, in an article describing Masayoshi Son’s grand plan for the Fund, wrote that all these companies “have something in common: They are involved in collecting enormous amounts of data, which are crucial to creating the brains for the machines that, in the future, will do more of our jobs and creating tools that allow people to better coexist.”

Further, Son said he believed robots would inexorably change the workforce and machines would become more intelligent than people, an event referred to as the “Singularity”. Mr. Son [said he] is on a mission to own pieces of all the companies that may underpin the global shifts brought on by artificial intelligence in transportation, food, work, medicine and finance. His vision is not just about predictions like the Singularity: he understands that we’ll need a massive amount of data to get to a future that is more dependent on machines and robotics.

Bottom Line

Companies involved in the emerging self-driving industry accounted for most of the dollars invested thus far in 2017. SoftBank’s fund and Masayoshi Son’s grand plan, combined with auto companies grabbing talent through strategic acquisitions, partnerships and investments, are leading the way. Robotics-related agricultural and healthcare-related investments were a distant second and third. Fourth went to underwater drones, systems and components.

3 Crucial Characteristics of an Autonomous Robot

For a robot to truly be considered autonomous, it must possess three very important characteristics: Perception, Decision and Actuation.


  • Perception: For an autonomous robot, perception means sensors. Laser scanners, stereo vision cameras (eyes), bump sensors (skin and hair), force-torque sensors (muscle strain), and even spectrometers (smell) are used as input devices for a robot. Similar to how a human uses the five senses to perceive the world, a robot uses sensors to perceive the environment around it.


  • Decision: Autonomous robots have a decision-making structure similar to that of humans. The “brain” of a robot is usually a computer, and it makes decisions based on what its mission is and what information it receives along the way. Autonomous robots also have a capability similar to the human neurological system: an embedded system that operates faster, and with higher authority, than the computer executing the mission plan and parsing data. This is how the robot can decide to stop if it notices an obstacle in its way, detects a problem with itself, or has its emergency-stop button pressed (see the simple control-loop sketch after the summary below).
  • Actuation: People have actuators called muscles. They take all kinds of shapes and perform all kinds of functions. Autonomous robots can have all kinds of actuators too, and a motor of some kind is usually at the heart of the actuator. Whether it’s a wheel, linear actuator, or hydraulic ram, there’s always a motor converting energy into movement.

In summation, a truly autonomous robot is one that can perceive its environment, make decisions based on what it perceives and/or has been programmed to recognize, and then actuate a movement or manipulation within that environment.
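As an illustration of how the three pieces fit together, here is a minimal sense-decide-act loop in Python. The sensor readings and motor interface are placeholders of our own, not any particular robot’s API, but the structure mirrors the description above, with safety checks taking priority over the mission plan in the way an embedded system would:

    import time

    def read_sensors():
        """Perception placeholder: return a dictionary of (fake) sensor readings."""
        return {"obstacle_distance_m": 1.5, "estop_pressed": False, "fault_detected": False}

    def decide(readings):
        """Decision placeholder: safety checks override the mission plan."""
        if readings["estop_pressed"] or readings["fault_detected"]:
            return {"left": 0.0, "right": 0.0}    # hard stop on e-stop or self-diagnosed fault
        if readings["obstacle_distance_m"] < 0.3:
            return {"left": 0.0, "right": 0.0}    # stop for a nearby obstacle
        return {"left": 0.5, "right": 0.5}        # otherwise follow the mission plan

    def actuate(command):
        """Actuation placeholder: send wheel speeds to the drive motors."""
        print("motor command:", command)

    for _ in range(100):                          # a real control loop would run indefinitely
        actuate(decide(read_sensors()))           # perceive -> decide -> act
        time.sleep(0.05)                          # roughly 20 Hz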

The best example of an autonomous robot is the Roomba, easily the most prolific truly autonomous robot on the market today. While it costs only a few hundred dollars, not thousands like many manufacturing robots, the Roomba can make decisions and take action based on what it perceives in its environment. It can be placed in a room, left alone, and it will do its job without any help or supervision from a person. This is true autonomy.


The post above has been submitted to us by https://stanleyinnovation.com


The post 3 Crucial Characteristics of an Autonomous Robot appeared first on Roboticmagazine.
