
Robots in Depth with Erin Rapacki


In this episode of Robots in Depth, Per Sjöborg speaks with Erin Rapacki about how the FIRST robotics competition was a natural and inspiring way into her career spanning multiple robotics companies.

Erin talks about her work marketing for several startups, including selling points, examples from different industries, and launching a robotics product. She also shares her game plan for starting a new robotics company and insight on the startup and venture capital landscape.

Tons of LIDARs at CES 2018

When it comes to robocars, new LIDAR products were the story of CES 2018. Far more companies showed off LIDAR products than can succeed, with a surprising variety of approaches. CES is now the 5th largest car show, with almost the entire north hall devoted to cars. In coming articles I will look at other sensors, software teams and non-car aspects of CES, but let’s begin with the LIDARs.

Velodyne

When it comes to robocar LIDAR, the pioneer was certainly Velodyne, which largely owned the market for close to a decade. Their $75,000 64-laser spinning drum has been the core product for many years, while most newer cars feature the 16- and 32-laser “puck” shaped units. The price of the pucks was recently cut in half, and they showed off the new $100K 128-laser unit as well as a new, more rectangular unit called the Velarray that uses a vibrating mirror to steer the beam for a forward view rather than a 360-degree view.

The Velodyne 64-laser unit has become such an icon that its physical properties have become a point of contention. The price has always been too much for any consumer car (and this is also true of the $100K unit, of course), but development teams have wisely realized that they want to do R&D with the most high-end unit available, expecting those capabilities to be moderately priced when it’s time to go into production. The Velodyne is also large, heavy, and, because it spins, quite distinctive. Many car companies and LIDAR companies have decried these attributes in their efforts to be different from Waymo (which now uses its own in-house LIDAR) and Velodyne. Most products out there are either clones of the Velodyne or much smaller units with a 90 to 120 degree field of view.

Quanergy

I helped Quanergy get going, so I have an interest and won’t comment much. Their recent big milestone is going into production on their 8-line solid state LIDAR. Automakers are very big on having few to no moving parts, so many companies are trying to produce that. Quanergy’s specifications are well below the Velodyne and many other units, but being in production makes a huge difference to automakers. If they don’t stumble on the production schedule, they will do well. Lower-resolution instruments with smaller fields of view require multiple units per car, so their cost must be kept low.

Luminar and 1.5 micron

The hot rising star of the hour is Luminar, with its high performance long range LIDAR. Luminar is part of the subset of LIDAR makers using infrared light in the 1.5 micron (1550nm) range. The eye does not focus this light into a spot on the retina, so you can emit a lot more power from your laser without being dangerous to the eye. (There are some who dispute this and think there may be more danger to the cornea than believed.)

The ability to use more power allows longer range, and longer range is important, particularly getting a clear view of obstacles over 200m away. At that range, more conventional infrared LIDAR in the 900nm band has limitations, particularly on dark objects like a black car or a pedestrian in black clothing. Even if the limitations are reduced, as some 900nm vendors claim, that’s not good enough for most teams if they are trying to make a car that goes at highway speeds.

At 80mph on dry pavement, you can hard brake in 100m. On wet pavement it’s 220m to stop, so you should not be going 80mph but people do. But a system, like a human, needs time to decide if it needs to brake, and it also doesn’t want to brake really hard. It usually takes at least 100ms just to receive a frame from sensors, and more time to figure out what it all means. No system figures it out from just one frame, usually a few frames have to be analyzed.
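
Readers can check those numbers with simple physics. A minimal sketch, assuming friction coefficients of roughly 0.65 for dry and 0.3 for wet pavement (my assumptions, not figures from any vendor):

```python
G = 9.81  # gravitational acceleration, m/s^2

def braking_distance_m(speed_mph, mu):
    """Distance to stop under hard braking: d = v^2 / (2 * mu * g)."""
    v = speed_mph * 0.44704  # mph -> m/s
    return v ** 2 / (2 * mu * G)

print(round(braking_distance_m(80, 0.65)))  # about 100 m on dry pavement
print(round(braking_distance_m(80, 0.30)))  # over 200 m on wet pavement
```

Perception latency and decision time add on top of this distance, which is the point of the paragraph above.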

On top of that, 200m out, you really only get a few spots on a LIDAR from a car, and one spot from a pedestrian. That first spot is a warning but not enough to trigger braking. You need a few frames and want to see more spots and get more data to know what you’re seeing. And a safety margin on top of that. So people are very interested in what 1.5 micron LIDAR can see — as well as radar, which also sees that far.

The problem with building 1.5 micron LIDAR is that light at that wavelength is not detected by silicon. You need more exotic stuff, like indium gallium arsenide. This is not really “exotic,” but compared to silicon it is. Our world knows a lot about how to do things with silicon, and making things with it is super mature and cheap. Anything else is expensive in comparison, and other materials won’t be made in the millions.

The new Velodyne has 128 lasers and 128 photodetectors and costs a lot. 128 lasers and detectors for 1.5 micron would be super costly today. That’s why Luminar’s design uses two modules, each with a single laser and detector. The beam is very powerful, and moved super fast. It is steered by moving a mirror to sweep out rasters (scan lines) both back and forth and up and down. The faster you move your beam the more power you can put into it — what matters is how much energy it puts into your eye as it is sweeping over it, and how many times it hits your eye every second.

The Luminar can do a super detailed sweep if you only want one sweep per second, and the point clouds (collections of spots with the distance and brightness measured, projected into 3D) look extremely detailed and nice. To drive, you need at least 10 sweeps per second, and so the resolution drops a lot, but is still good.

Another limit which may surprise you is the speed of light. To see something 250m away, the light takes 1.6 microseconds to go out and back. This limits how many points per second you can get from one laser. Speeding up light is not an option. There are also limits on how much power you can put through your laser before it overheats.
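
That ceiling is easy to compute: one laser must wait for each return before it can fire at the next spot. A sketch of the arithmetic:

```python
C = 299_792_458.0  # speed of light, m/s

def round_trip_s(range_m):
    """Time for a pulse to reach a target and return."""
    return 2 * range_m / C

def max_points_per_sec(range_m):
    """Upper bound on point rate for a single laser at this range."""
    return 1.0 / round_trip_s(range_m)

print(round_trip_s(250))        # about 1.7 microseconds
print(max_points_per_sec(250))  # roughly 600,000 points/s, best case
```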

(To avoid overheating, you can concentrate your point budget on the regions of greatest interest, such as those along the road or those that have known targets. I describe this in a US patent of mine.) While I did not see it myself, several people reported to me that Luminar’s current units require a fairly hefty box to deliver power to the LIDAR and do the computational post-processing that produces the pretty point clouds.


Luminar’s device is also very expensive (they won’t publish the price) but Toyota built a test vehicle with 4 of their double-laser units, one in each of the 4 directions. Some teams are expressing the desire to see over 200m in all directions, while others think it is really only necessary to see that far in the forward direction.

You do have to see far to the left and right when you are entering an intersection with cross traffic, and you also have to see far behind if you are changing lanes on a German autobahn and somebody might be coming up on your left 100km/h faster than you (it happens!) Many teams feel that radar is sufficient there, because the type of decision you need to make (do I go or don’t I) is not nearly so complex and needs less information.

As noted before, while most early LIDARS were in the 900nm bands, Google/Waymo built their own custom LIDAR with long range, and the effort of Uber to build the same is the subject of the very famous lawsuit between the two companies. Princeton Lightwave, another company making 1.5 micron LIDAR, was recently acquired by Ford/Argo AI — an organization run by Bryan Salesky, another Google car alumnus.

I saw a few other companies with 1.5 micron LIDARs, not as far along as Luminar, but several pointed out that they did not need the large box for power and computing, saying they only needed about 20 to 40 watts for the whole device. One was Innovusion, which did not have a booth but showed me the device in their suite; there, however, it was not possible to test range claims.

Tetravue and new Time of Flight

Tetravue showed off their radically different time of flight technology. So far there have been only a few methods to measure how long it takes for a pulse of light to go out and come back, thus learning the distance.


The classic method is basic sub-nanosecond timing. To get 1cm accuracy, you need to measure the time to close to 50 picosecond accuracy, and circuits are getting better than that. This can be done either with scanning pulses, where you send out a pulse and then look in precisely that direction for the return, or with “flash” LIDAR, where you send out a wide, illuminating pulse and then have an array of detector/timers which count how long each pixel’s return took to get back. This method works at almost any distance.
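
A sketch of the timing arithmetic (my own illustration, not any vendor’s circuit): range resolution is set by how finely you can measure the round trip, and a full round trip of 2cm per 1cm of range works out to about 67 picoseconds.

```python
C = 299_792_458.0  # speed of light, m/s

def distance_m(round_trip_s):
    """Distance from a measured round-trip time: the light goes out and back."""
    return C * round_trip_s / 2

# Round-trip timing precision needed for 1 cm of range resolution
# (the round trip is 2 cm of extra path per cm of range):
dt = 2 * 0.01 / C  # about 67 picoseconds
```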

The second method is to use phase. You send out a continuous beam but you modulate it. When the return comes back, it will be out of phase with the outgoing signal. How much out of phase depends on how long it took to come back, so if you can measure the phase, you can measure the time and distance. This method is much cheaper but tends to only be useful out to about 10m.
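
A sketch of the phase arithmetic; the 15 MHz modulation frequency is an assumed illustrative value, chosen because it puts the unambiguous range right around the 10m mentioned above:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def distance_from_phase(phase_rad, mod_freq_hz):
    """Recover distance from the measured phase lag of the return."""
    t = phase_rad / (2 * math.pi * mod_freq_hz)  # time of flight
    return C * t / 2

def ambiguity_range_m(mod_freq_hz):
    """Beyond this distance the phase wraps around and range is ambiguous."""
    return C / (2 * mod_freq_hz)

print(ambiguity_range_m(15e6))  # just under 10 m
```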

Tetravue offers a new method. They send out a flash, and put a decaying (or opening) shutter in front of an ordinary return sensor. Depending on when the light arrives, it is attenuated by the shutter. The amount it is attenuated tells you when it arrived.
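
A toy model of the idea. The linear shutter ramp and all the numbers here are my assumptions for illustration; Tetravue has not published its actual shutter profile:

```python
C = 299_792_458.0  # speed of light, m/s

def arrival_time_s(gated_signal, reference_signal, ramp_s):
    """With a shutter whose transmission ramps linearly from 1 down to 0,
    the gated/ungated signal ratio encodes arrival time: ratio = 1 - t/ramp."""
    ratio = gated_signal / reference_signal
    return (1.0 - ratio) * ramp_s

def distance_m(t):
    return C * t / 2

# A return seen at 60% transmission on an assumed 500 ns ramp:
t = arrival_time_s(0.6, 1.0, 500e-9)  # 200 ns after the flash
print(distance_m(t))                  # about 30 m
```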

I am interested in this because I played with such designs myself back in 2011, instead proposing the technique for a new type of flash camera with even illumination, because I did not feel you could get enough range. Indeed, Tetravue only claims a maximum range of 80m, which is the challenge: that’s not enough for highway driving or even local expressway driving, but it could be useful for lower speed urban vehicles.

The big advantages of this method are cost — it uses mostly commodity hardware — and resolution. Their demo was a 1280×720 camera, and they said they were making a 4K camera. That’s actually more resolution than most neural networks want, but digital crops from within the image could make for the best object recognition results to be found, at least on closer targets. This might be a great tool for recognizing things like pedestrian body language and more.

At present the Tetravue uses light in the 800nm bands. That is easier to receive efficiently on silicon, but there is more ambient light from the sun in this band to interfere.

The different ways to steer

In addition to differing ways to measure the pulse, there are also many ways to steer it. Some of those ways include:

  • Mechanical spinning — this is what the Velodyne and other round LIDARs do. This allows the easy making of 360 degree view LIDARs, and in the case of the Velodyne it also stops rain from collecting on the instrument. One big issue is that people are afraid of the reliability of moving parts, especially at automotive scale.
  • Moving, spinning or vibrating mirrors. These can be sealed inside a box and the movement can be fairly small.
  • MEMS mirrors, which are microscopic mirrors on a chip. Still moving, but effectively solid state. This is how DLP projectors work. Some new companies, like Innovision, featured LIDARs steered this way.
  • Phased arrays — you can steer a beam by having several emitters and adjusting the phase so the resulting beam goes where you want it. This is entirely solid state.
  • Spectral deflection — it is speculated that some LIDARS do vertical steering by tuning the frequency of the beam, and then using a prism so this adjusts the angle of the beam.
  • Flash LIDAR, which does not steer at all, and instead has an array of detectors.

There are companies using all these approaches, or combinations of them.

The range of 900nm LIDAR

The most common and cheapest LIDARs are, as noted, in the 900nm wavelength band. This is a near-infrared band, far enough from visible light that not a lot of ambient light interferes. At the same time, at this wavelength it’s harder to get silicon to trigger on the photons, so it’s a trade-off.

Because this light acts like visible light and is focused by the lens of the eye onto the retina, keeping the beam eye-safe is a problem. At a bit beyond 100m, at the maximum radiation level that is eye-safe, fewer and fewer photons reflect back to the detector from dark objects. Yet many vendors claim ranges of 200m or even 300m in this band, while others claim that is impossible. Only hands-on analysis can tell how reliably these longer ranges can actually be delivered, but most feel it can’t be done at the level needed.

There are some tricks which can help, including increasing sensitivity, but there are physical limits. One technique being considered is dynamic adjustment of the pulse power, reducing it when the target is close to the laser. Right now, if you want to send out a beam that you will see back from 200m, it needs to be so powerful that it could hurt the eye of somebody close to the sensor. Most devices try for physical eye safety: they never emit power that would be unsafe to anybody. The beam’s instantaneous power is at a dangerous level, but it moves so fast that the total radiation delivered to any one spot is acceptable. They have interlocks so that the laser shuts down if it ever stops moving.

To see further, you would need to detect the presence of an object (such as the side of a person’s head) that is close to you, and reduce the power before the laser scanned over to their eyes, keeping it low until past the head, then boosting it immediately to see far away objects behind the head. This can work, but now a failure of the electronic power control circuits could turn the devices into a non-safe one, which people are less willing to risk.
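
In control terms, the scheme being described might look like the sketch below. Every threshold and power level here is invented purely for illustration; the point is that eye safety would now depend on this code path working:

```python
def pulse_power_w(nearest_target_m, scanning,
                  full_power_w=100.0, eye_safe_power_w=1.0,
                  close_threshold_m=20.0):
    """Hypothetical dynamic power control for a scanning LIDAR."""
    if not scanning:
        return 0.0               # interlock: never fire a stopped beam
    if nearest_target_m < close_threshold_m:
        return eye_safe_power_w  # something (maybe a head) is close: stay safe
    return full_power_w          # clear to long range: full power

# A sensor fault that wrongly reports "no close target" or "still scanning"
# would defeat both protections, which is exactly the risk described above.
```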

The Price of LIDARs

LIDAR prices are all over the map, from the $100,000 Velodyne 128-line to solid state units forecast to drop close to $100. Who are the customers that will pay these prices?

High End

Developers working on prototypes often choose the very best (and thus most expensive) unit they can get their hands on. The cost is not very important on prototypes, and you don’t plan to release for a few years. These teams make the bet that high performance units will be much cheaper when it’s time to ship. You want to develop and test with the performance you can buy in the future.

That’s why a large fraction of teams drive around with $75,000 Velodynes or better. That’s too much for a production unit, but they don’t care about that. It has led people to predict, incorrectly, that robocars are far in the future.

Middle End

Units in the $7,000 to $20,000 range are too expensive as an add-on feature for a personal car. There is no component of a modern car that is this expensive, except the battery pack in a Tesla. But for a taxi, it’s a different story. With so many people sharing it, the cost is not out of the question compared to the cost of the human driver being taken out of the equation. In fact, some would argue that even the $100,000 LIDAR, amortized over 6 years, is still cheaper than a driver.
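
The back-of-envelope arithmetic behind that claim, with the annual cost of a human driver as my own assumption:

```python
lidar_cost = 100_000          # high-end unit, from the article
service_years = 6
annual_lidar = lidar_cost / service_years  # about $16,700 per year

assumed_driver_cost = 40_000  # assumed annual wages + benefits for a driver
print(annual_lidar < assumed_driver_cost)  # True: the LIDAR is cheaper
```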

In this case, cost is not the big issue, it’s performance. Everybody wants to be out on the market first, and if a better sensor can get you out a year sooner, you’ll pay it.

Low end

LIDARs that cost (or will cost) below $1,000, especially below $250, are the ones of interest to major automakers who still think the old way: Building cars for customers with a self-drive feature.

They don’t want to add a great deal to the bill of materials for their cars, and they are driving demand for all the low end, typically solid state devices.

None

None of these LIDARS are available today in automotive quantities or quality levels. Thus you see companies like Tesla, who want to ship a car today, designing without LIDAR. Those who imagine LIDAR as expensive believe that lower cost methods, like computer vision with cameras, are the right choice. They are right in the very short run (because you can’t get a LIDAR) and in the very long run (when cost will become the main driving factor) but probably wrong in the time scales that matter.

Some LIDARs at CES

Here is a list of some of the other LIDAR companies I came across at CES. There were even more than this.

AEye — MEMS LIDAR and software fused with visible light camera

Cepton — MEMS-like steering, claims 200 to 300m range

Robosense RS-LiDAR-M1 Pro — claims 200m MEMS, 20fps, 0.09 deg by 0.2 deg resolution, 63 x 20 deg field of view

Surestar R-Fans (16 and 32 laser, puck style), up to 20Hz

Leishen MX series — short range for robots, 8 to 48 line units, (puck style)

ETRI LaserEye — Korean research product

Benewake Flash LIDAR, shorter range.

Infineon box LIDAR (Innoluce prototype)

Innoviz — MEMS mirror 900nm LIDAR with claimed longer range

Leddartech — older company from Quebec, now making a flash LIDAR

Ouster — $12,000 super light LIDAR.

To come: Thermal Sensors, Radars, and Computer Vision, better cameras

Robotics innovations at CES 2018

The 2018 Nissan Leaf receives CES2018 Tech For a Better World Innovation Award.

Cars, cars, cars, cars. CES2018, the Consumer Technology Association’s massive annual expo, was full of self driving electric and augmented cars. Every hardware startup should visit CES before they build anything. It has to be the most humbling experience any small robotics startup could have. CES2018 is what big marketing budgets look like. And as robotics shifts more and more to consumer facing, this is what the competition looks like.

CES2018 covered a massive record breaking 2.75 million net square feet of exhibit space, featuring more than 3,900 exhibitors, including some 900 startups in the Eureka Park Innovation Zone. More than 20,000 products launched at CES 2018.

Whill’s new Model Ci intelligent personal electric vehicle
Robomart’s self driving vehicles will bring you fresh food

“The future of innovation is on display this week at CES, with technology that will empower consumers and change our world for the better,” said Gary Shapiro, president and CEO, CTA. “Every major industry is represented here at CES 2018, with global brands and a record-number of startups unveiling products that will revolutionize how we live, work and play. From the latest in self-driving vehicles, smart cities, AI, sports tech, robotics, health and fitness tech and more, the innovation at CES 2018 will further global business and spur new jobs and new markets around the world.”

In 2014, we helped produce a special “Robots on the Runway” event to bring robotics to CES. Fast forward four short years and new robots were everywhere at CES2018, ranging from agbots, tennisbots, drones, robot arms, robot prosthetics and robot wheelchairs, to the smart home companion and security robots.

Tennibot, a Robot Launch 2018 Finalist and CES2018 Innovation Award Winner
Soft Robotics a CES2018 Innovation Award Winner

It was inspiring to see so many Silicon Valley Robotics members or Robot Launch startup competition alumni winning Innovation Awards at CES2018, including Soft Robotics, Tennibot, Foldimate, Whill, Buddy from Blue Frog Robotics and Omron Adept’s robot playing ping pong.

Buddy from Blue Frog Robotics

For startups the big question is – do you build the car? Or the drone? Or do you do something innovative with the hardware, or create a platform for it? CES2018 is also shifting towards industrial and enterprise facing with their new Smart Cities Marketplace, joining the AI, Robotics, AR & VR marketplaces, and a slew of others.

With some 300,000 NSF of automotive exhibit space, the vehicle footprint at CES makes it the fifth largest stand-alone automotive show in the U.S. and was backed up by conference sessions with politicians and policy makers.

Intel celebrated innovation, explored what’s next for big data and set a Guinness World Record with its Shooting Star Mini Drone show – a fleet of 100 drones controlled without GPS by a single pilot.

CES2018 was also a little conflicted about the rise of robotics. The marketing message this year was “Let’s Go Humans”, celebrating human augmentation. However, as the second ad panel shows, CTA recognizes that its main attraction is showcasing the latest technologies, not necessarily the greatest technologies.

And from the looks of crowds around certain exhibits in the CES2018 Innovation Zone, after the carfest that was this year’s CES, all things Quantum will be the next big frontier. But I don’t think you have to be a Goliath at CES2018 to win the hardware market. I was most impressed by a couple of ‘Davids’ not ‘Goliaths’ in my admittedly very short CES2018 tour.

IBM’s quantum computing display at CES2018 Innovation Zone
Vesper’s VM1010 – First ZeroPower Listening MEMS Microphone

For example, Vesper’s VM1010 is the first ZeroPower Listening MEMS microphone – a piezoelectric embedded microphone chip that will allow virtually powerless voice recognition technology. With voice being the primary method of communicating with all of these robots and smart devices, this little chip is worth its weight in cryptocurrency.

And there were robot pets everywhere. But forget the robot dogs and robot cats, shiny and metallic, plastic pastel or fur covered and cute as they were. Once again, I’m betting on the David of the field and plugging Petronics’ Mousr. I was as hooked as a feline on catnip when I saw the smart mouse in action. After a successful Kickstarter, Mousr is now available for preorder with expected delivery in March 2018. I’ve been badly burned ordering robots from crowdfunding campaigns more than once, but I ordered a Mousr anyway.

One of many many robot pets at CES2018. But gorgeous!
Mousr, the tail that wags the cat from Petronics

CES2018 also predicted that 2018 tech industry revenue will reach $351 billion – a 3.9 percent increase over 2017.

CES 2018 recap: Out of the dark ages

As close to a quarter million people descended on a city of six hundred thousand, CES 2018 became the perfect metaphor for the current state of modern society. Unfazed by floods, blackouts, and transportation problems, technology steamrolled ahead. Walking the floor last week at the Las Vegas Convention Center (LVCC), the hum of the crowd buzzed celebrating the long awaited arrival of the age of social robots, autonomous vehicles, and artificial intelligence.

In the same way that Alexa was last year’s CES story, social robots were everywhere this year, turning the show floor into a Disney-inspired mechatronic festival (see above). The applications promoted ranged from mobile airport/advertising kiosks to retail customer service agents to family in-home companions. One French upstart, Blue Frog Robotics, stood out from the crowd of ivory-colored rolling bots. Blue Frog joined the massive French contingent at Eureka Park and showcased its Buddy robot to the delight of thousands of passing attendees.

When I met with Blue Frog’s founder Rodolphe Hasselvander, he described his vision of a robot that is closer to a family pet than to a cold-voiced utility assistant. Unlike other entrants in the space, Buddy is more like a Golden Retriever than an iPad on wheels. Its cute, blinking big eyes immediately captivate the user, forming a tight bond between robopet and human. Hasselvander demonstrated how Buddy performs a number of unique tasks, including: patrolling the home perimeter for suspicious activity; looking up recipes in the kitchen; providing instructions for do-it-yourself projects; entertaining the kids with read-along stories; and even reminding grandma to take her medicine. Buddy will be available for “adoption” in time for the 2018 holiday season for $1,500 on Blue Frog’s website.

Blue Frog’s innovation was recognized by the Consumer Electronics Show with a “Best of Innovation” award. In accepting the honor, Hasselvander exclaimed, “The icing on the cake is that our Buddy robot is named a ‘Best of Innovation’ Award winner, as we believe Buddy truly is the most full-featured and best-engineered companion robot in the market today. This award truly shows our ongoing commitment to produce a breakthrough product that improves our lives and betters our world.”

Last year, one of the most impressive CES sights was the self-driving track in the parking lot of the LVCC’s North Hall. This year, the autonomous cars hit the streets with live demos and shared taxi rides. Lyft partnered with Aptiv to offer ride hails to and from the convention center, among other preprogrammed stops. While the cars were monitored by a safety driver, Aptiv explicitly built the experience to “showcase the positive impact automated cars will have on individual lives and communities.” Lyft was not the only self-driving vehicle on the road; autonomous shuttles developed by Boston-based Keolis and French company Navya for the city of Las Vegas were providing trips throughout the conference. Inside the LVCC, shared mobility was a big theme amongst several auto exhibitors (see below).

Torc Robotics made news days before the opening of CES when its autonomous vehicle successfully navigated around an SUV going the wrong way into oncoming traffic (see the video above). Michael Fleming, Chief Executive of Torc, shared with a packed crowd the video feed from the dashcam, providing a play-by-play account. He boasted that a human-driven car next to his self-driving Lexus RX 450 followed the same path to avoid the collision. Fleming posted the dashcam video online to poll other human drivers, aiming to program a number of scenarios into his AI to avoid future clashes with reckless drivers.

The Virginia-based technology company has been conducting self-driving tests for more than a decade. Torc placed third in the 2007 DARPA Urban Challenge, a prestigious honor for researchers tackling autonomous city driving. Fleming is quick to point out that along with Waymo (Google), Torc is one of the earliest entries into the self-driving space. “There are an infinite number of corner cases,” explains Fleming, relying on petabytes of driving data built over a decade. Fleming explained to a hushed room that each time something out of the ordinary happens, like the wrong-way driver days prior, Torc logs how its car handled the situation and continues to make refinements to the car’s brain. The next time a similar situation comes up, any Torc-enabled vehicle will instantly implement the proper action. Steve Crowe of Robotics Trends said it best after sitting in the passenger seat of Fleming’s self-driving car: “I can honestly say that I didn’t realize how close we were to autonomous driving.”

AI was everywhere last week in Las Vegas, inside and outside the show. Unlike cute robots and cars, artificial intelligence is difficult to display in glitzy booths. While many companies, including Intel and Velodyne, proudly showed off their latest sensors it became very clear that true AI was the defining difference for many innovations. Tucked in the back of the Sands Convention Center, New York-based Emoshape demonstrated a new type of microchip with embedded AI to synthetically create emotions in machines.  The company’s Emotion Processing Unit (EPU) is being marketed for the next generation of social robots, self-driving cars, IoT devices, toys, and virtual-reality games.

When speaking with Emoshape founder Patrick Levy-Rosenthal, he explained his latest partnership with the European cellphone provider, Orange. Levy-Rosenthal is working with Orange’s Silicon Valley innovation office to develop new types of content with its emotion synthesis technology. As an example, Emoshape’s latest avatar JADE is an attractive female character with real-time sentiment-based protocols. According to the company’s press release, its software “engine delivers high-performance machine emotion awareness, and allows personal assistant, games, avatars, cars, IoT products and sensors to feel and develop a unique personality.” JADE and Emoshape’s VR gaming platform DREAM is being evaluated by Orange for a number of enterprise and consumer use cases.

Emotions ran high at CES, especially during the two-hour blackout. On my redeye flight back to New York, I dreamt of next year’s show with scores of Buddy robots zipping around the LVCC greeting attendees, with their embedded Emoshape personalities, leading people to fleets of Torc-mobiles around the city. Landing at JFK airport, the biting wind and frigid temperature woke me up to the reality that I have 360 more days until CES 2019.

 

Point-to-point mobile robots hot sellers

Today’s e-commerce spurs demand for reduced response times in fulfillment centers, generally involves fewer products per order, and is constantly changing — increasing system complexity and the need for flexibility in automation. Today’s warehouses and distribution centers are far more complex than they were 10 years ago; employee turnover remains high; complexity brings higher wages; and labor is increasingly hard to find — all adding to the equation.

Businesses are making investments in a variety of technologies to improve their inventory control, order processing methods and labor situation, and to enhance their pick and pack operations to be faster, less rigid and less physically demanding, and to achieve more accurate results. “These factors are contributing to the need to convert warehouses and distribution centers into assets for competitive differentiation. Mobility will be front and center in this shift,” says VDC Research in their recent ‘Taking Advantage of Apps and App Modernization in Warehousing‘ report.

[NOTE: Mobile robotic platforms can be “autonomous” (AMRs) also called self-driving or vision guided vehicles (SDVs or VGVs), which means they can navigate an uncontrolled environment without the need for physical or electromechanical guidance. Alternatively, mobile robots like automated guided vehicles (AGVs) rely on guidance devices that allow them to travel a pre-defined navigation route in relatively controlled spaces.]

Point-to-point mobile robots

Businesses of all types — from auto manufacturers to hospitals, from job shops to hotels — want to use point-to-point mobile devices instead of human messengering or towing for a variety of cost-saving reasons but mainly because it’s now achievable, cost-effective, and proven — plus there’s a real need to replace older people-dependent mobility tasks with more automated methods. Warehouse executives know that picking (grasping) still eludes robotics so they can’t buy cost-efficient robots that can pick and pack. But they also know that sensors and communication have improved to the point that navigation, collision avoidance, and low-cost mobile robots (and kits for forklifts and AGVs) can equip a warehouse with safe mobile devices that can carry or tow items from place to place thereby reducing costs and increasing productivity. Pallets, boxes and totes can be ported from point A to point B with economy and efficiency using a networked swarm of small, medium and large AMRs. Thus managers can effect economic efficiencies by cutting out wasted steps and reducing injuries and lost time through the use of point-to-point mobile robots.

Research and forecasts

  1. In a recent 193-page report by QY Research, the new autonomous mobile robots market was reported to have grossed $158 million in 2016 and is projected to reach $390 million by 2022, at a CAGR of 16.26% between 2016 and 2022. This is the most conservative of the many research reports on the subject.
  2. Tractica, a Colorado research firm with more optimistic projections and including a more expanded view of robot applications in the warehouse, recently published their Warehousing and Logistics Robots report which forecasts worldwide shipments of warehousing and logistics robots to grow from approximately 40,000 units in 2016 to 620,000 units in 2021 with estimated revenue of $22.4 billion in 2021.
  3. IDTechEx is forecasting that AGVs/carts, autonomous industrial material handling vehicles, autonomous mobile carts, autonomous mobile picking robots, autonomous trucks, and last mile delivery drones and droids will become a $75bn market by 2027 and more than double by 2038. The report also discusses how mature technologies such as AGVs are evolving to be vision and software navigated to perform their various material handling tasks; it forecasts how navigational autonomy will “induce a colossal transfer of value from wages paid for human-provided driving services towards autonomous industrial vehicles which in turn will fuel the growth in this newer material handling vehicle industry.” [NOTE: Much of this revenue will be for last-mile delivery from startups like Marble, Marathon Technologies, Piaggio and Starship.]
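
The growth-rate arithmetic in the QY Research item can be sanity-checked in a few lines. This is only an illustrative snippet (the function name `project` is ours, not from the report): compounding the 2016 base of $158 million at a 16.26% CAGR over the six years to 2022 should land near the reported $390 million.

```python
# Sanity-check the QY Research figures: $158M in 2016 compounded at a
# 16.26% CAGR over 6 years should be close to the reported $390M.

def project(base, cagr, years):
    """Compound a base value at a fixed annual growth rate."""
    return base * (1 + cagr) ** years

projected_2022 = project(158, 0.1626, 2022 - 2016)
print(round(projected_2022))  # prints 390
```

The figures check out: the reported CAGR is consistent with the start and end values.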

Tractica surveyed a wider range of vendors than QY Research, but neither report was up-to-date with the Korean and Chinese vendors and the many new startups in this space. Noticeably missing from both were Asian vendors Geek+, Yujin, Quicktron and GreyOrange, startup Canvas Technology, and the nav/vision conversion kit providers Seegrid, Balyo and RoboCV. The IDTechEx report includes all of these and more.

Point-to-point robot vendors

There has been much media attention — and a good deal of hype — about autonomous mobile picking robots, autonomous trucking, and last-mile delivery (by land or air), yet few vendors are delivering products in quantity or beyond the trial stage, which is one reason why the point-to-point vendors are doing so well. There is also news about converting AGVs, forklifts and tugs to become vision-enabled and autonomous, and although these adaptations are being made, the conversions are proceeding at a slower pace than the sales of the vendors profiled below.

Here are profiles for a select few of the most interesting vendors serving the point-to-point mobile logistics robots market:

  • MiR (Mobile Industrial Robots), a 2015 Danish startup, is the first mover in flexible point-to-point mobility. Its product line is spot on what other mobile robot manufacturers are only beginning to learn from their customers: they want a bare-bones, simply instructed, low-cost mobile platform that can carry or tow anything anywhere. MiR's products meet those criteria. Their towing system enables automatic pick-up and drop-off of carts carrying payloads of up to 1,100 lbs. They also provide fleet software that optimizes control, coordinates orders across multiple robots, enables switch-outs when a robot must recharge, and eases programming and integration with manufacturing and warehousing systems.
    • In a recent press release, MiR announced that 2017 sales had tripled over 2016, with unit sales exceeding 1,000 robots; that its headcount grew to 60 employees and is expected to double in 2018; that MiR robots are now at work at Honeywell, Argon Medical, Kamstrup, Airbus, Flex and many other facilities around the world; and that MiR anticipates a similar increase in 2018.
    • MiR’s rapid rise parallels the trend of businesses using point-to-point mobile devices instead of human messengering or towing. It isn’t MiR alone that is finding significant growth – other suppliers are also selling well above expectations.
    • MiR’s rise is also being propelled by MiR CEO Thomas Visti’s use of a tried-and-true global network of distributors/integrators which he developed for the very successful co-bot manufacturer Universal Robots. MiR’s 120 distributor/reseller network covers 40 countries and has helped MiR jump-start its global sales.
  • Swisslog, a Kuka/Midea subsidiary, has a wide and varied product line covering healthcare, warehouse and distribution centers. Their TransCar AGVs for hospitals are used as tugs and tow vehicles; their Carry Robots are used in factories and warehouses for point-to-point deliveries and to move shelves to and from pickers. Swisslog also offers extensive automated warehouse devices such as the CarryPick System, miniload cranes, pallet stacker robots, conveyor systems, and AutoStore, an elaborate top-down small-parts storage and item-picking system.
  • Seegrid, co-founded in 2003 by Hans Moravec of CMU's Robotics Institute and funded by Giant Eagle, the big East Coast grocery chain, began with the goal of helping distribution centers like Giant Eagle's transform their AGVs into vision guided vehicles (VGVs). They built their own line of lifts and tugs but more recently have joint-ventured with lift manufacturers so those manufacturers can offer Seegrid vision systems, which navigate without wires, lasers, magnets or tape, as add-on equipment. Seegrid systems focus on moving full pallet loads to and from storage racks and in and out of trucks.
  • Fetch Robotics has a catchy description for their mobile robots: VirtualConveyor robots. They have partnered with DHL, which helped Fetch produce a glowing video of how the robots are being used in a major parts-resupply warehouse. Fetch, which started out as a pick-and-delivery system, has been quick to reorganize to take advantage of the demand for point-to-point robots, including adding robots that can handle a variety of heavy payloads.
  • Clearpath Robotics, a maker of research UGVs, has followed a similar path as Fetch with its new OTTO line of mobile transporters: quickly adapting to market demand by producing autonomous transporters that handle both heavy and light loads. They offer two stylish, well-lighted transporters, one for 100kg payloads and one for 1,500kg, plus fleet management software. Their 360° lighting system displays familiar turn signals, brake lights and status lights, along with audible tones, so it is clearly evident where the device is going.
  • Vecna Robotics, a developer and provider of robotics and telepresence solutions for healthcare, has recently expanded into logistics with a line of general-purpose mobile robots. They offer platforms, lifters, tuggers, and conversion kits, so their product line covers the transport of pallets, cases, and individual items or totes.
  • Aethon, a developer and manufacturer of autonomous tug robots for hospitals and warehouses, was recently acquired by ST Engineering, a Singapore conglomerate with 50 years of engineering experience, a presence in over 100 countries, and a focus on the aerospace, electronics, land systems and marine sectors. Aethon has made over 30 million robotic deliveries and pioneered its patented command (communication) center for maintaining its robots. They offer two versions of their tug: one that can carry or pull 450kg and another rated for 645kg. [NOTE: Similar communication centers have since become de rigueur for maintaining operational up-time for mobile robot fleets.]
  • Omron Adept, in addition to a wide range of one-armed and Delta robots, has a line of mobile platforms and transporters, plus fleet and navigation software. Adept Technology acquired mobile robotics pioneer MobileRobots in 2010 and was itself acquired by Omron in 2015. MobileRobots sold its mobile robots to Swisslog and many others, who rebranded them. Adept's autonomous mobile robots were unlike traditional automated guided vehicles (AGVs): they required no facility modifications (such as floor magnets or navigational beacons), saving users up to 15% in deployment costs.
  • Amazon Robotics, Geek+, Quicktron and GreyOrange all provide very similar shelf-lifting robotic systems that bring shelves to pick stations, where items are selected and packed before the shelves are returned to a dynamic free-form warehouse. Amazon has gotten the lion's share of the news because it acquired Kiva Systems, the inventor of this type of goods-to-man system, and now has over 40,000 of those robots at work in its fulfillment centers. But GreyOrange, Geek+ and Quicktron also have thousands of these robots deployed, with many thousands more coming online across Asia. They are included here because they represent an important use of mobility in fulfillment applications that cannot yet be fully handled by picking robots.
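
The fleet-coordination features these vendors advertise (assigning transport orders, swapping out robots that need to charge) can be sketched in a few lines. This is a toy dispatcher of our own invention — none of the names here are any vendor's actual API — showing the core idea: the nearest idle robot with enough charge takes the order.

```python
# Toy sketch of point-to-point fleet dispatch: the closest idle robot
# with sufficient battery is assigned to a pickup point. Illustrative
# only; not modeled on any vendor's real fleet software.

from dataclasses import dataclass

@dataclass
class Robot:
    name: str
    x: float
    y: float
    battery: float  # state of charge, 0.0 to 1.0
    busy: bool = False

def dispatch(robots, pickup, min_battery=0.2):
    """Assign the closest idle, sufficiently charged robot to a pickup point."""
    candidates = [r for r in robots if not r.busy and r.battery >= min_battery]
    if not candidates:
        return None  # nothing available; the order stays queued
    chosen = min(candidates,
                 key=lambda r: (r.x - pickup[0]) ** 2 + (r.y - pickup[1]) ** 2)
    chosen.busy = True
    return chosen

fleet = [Robot("AMR-1", 0, 0, 0.9), Robot("AMR-2", 5, 5, 0.15),
         Robot("AMR-3", 2, 1, 0.8)]
print(dispatch(fleet, (3, 2)).name)  # prints AMR-3 (closest with charge)
```

Real fleet managers layer traffic control, charging schedules, and WMS/ERP integration on top of this kind of assignment loop, but the dispatch decision itself is essentially a filtered nearest-neighbor search.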

Other vendors offering various types of mobile robotics included in the research reports by IDTechEx, QY Research and Tractica include:

  • Aviation Industry Corp of China
  • Cimcorp Automation
  • Daifuku
  • Dematic
  • Denso Wave
  • FANUC
  • Hi-Tech Robotic Systemz
  • Kawasaki Heavy Industries
  • KION Group
  • Knapp
  • Krones
  • Locus Robotics
  • Meidensha Corporation
  • Mitsubishi Electric Corporation
  • Mobile Industrial Robots (MiR)
  • Murata Machinery
  • SMP Robotics
  • SSI SCHAEFER
  • Tata Motors (BRABO)
  • Toyota Industries Corporation (Material Handling Group TMHG)
  • Vanderlande
  • Yaskawa Electric Corporation

Bottom Line

There are thousands of point-to-point operations being carried out by humans towing or pushing rolling carts in all types of businesses, for all manner of purposes. Point-to-point mobile robots are popular now because they can replace those humans with simple, easy-to-operate devices that do the same job for less cost, hence a fast ROI. With labor costs rising, robot costs coming down, and so many gofer applications available, these robots are no-brainers for businesses everywhere, and will remain so for a long time to come.
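
The "fast ROI" claim comes down to simple payback math. The figures below are illustrative assumptions of ours, not vendor pricing, but they show why the economics are compelling when a transporter replaces recurring cart-pushing labor.

```python
# Back-of-the-envelope payback calculation behind the fast-ROI claim.
# All dollar figures are illustrative assumptions, not vendor pricing.

def payback_months(robot_cost, monthly_labor_saved, monthly_upkeep):
    """Months until cumulative net savings cover the robot's purchase price."""
    net = monthly_labor_saved - monthly_upkeep
    if net <= 0:
        raise ValueError("no net savings; the robot never pays back")
    return robot_cost / net

# e.g. a $35,000 transporter replacing $4,000/month of cart-pushing labor,
# at $500/month of upkeep, pays back in well under a year:
print(round(payback_months(35_000, 4_000, 500), 1))  # prints 10.0
```

With payback measured in months rather than years, the purchase clears most capital-expenditure hurdles easily, which is consistent with the rapid sales growth reported above.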

‘Earworm melodies with strange aspects’ – what happens when AI makes music

A new AI machine creates new music from songs it’s fed, mimicking their style. Image credit – FlowMachines

by Kevin Casey

The first full-length mainstream music album co-written with the help of artificial intelligence (AI) was released on 12 January and experts believe that the science behind it could lead to a whole new style of music composition.

Popular music has always been fertile ground for technological innovation. From the electric guitar to the studio desk, laptops and the wah-wah pedal, music has the ability to absorb new inventions with ease.

Now, the release of Hello World, the first entire studio album co-created by artists and AI could mark a watershed in music composition.

Stemming from the FlowMachines project, funded by the EU's European Research Council, the album is the fruit of the labour of 15 artists, music producer Benoit Carré, aka Skygge, and creative software designed by computer scientist and AI expert François Pachet.

Already Belgian pop sensation Stromae and chart-topping Canadian chanteuse Kiesza have been making waves with the single Hello Shadow.

The single Hello Shadow, featuring Stromae and Kiesza, is taken from the AI-co-written album, Hello World. Video credit – SKYGGE MUSIC

The software works by using neural networks – artificial intelligence systems that learn from experience by forming connections over time, thereby mimicking the biological networks of people’s brains. Pachet describes its basic job as ‘to infer the style of a corpus (of music) and generate new things’.

A musician first provides ‘inspiration’ to the software by exposing it to a collection of songs. Once the system understands the required style, it outputs a new composition.

‘The system analyses the music in terms of beats, melody and harmony,’ said Pachet, ‘and then outputs an original piece of music based on that style.’
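
The "infer the style of a corpus and generate new things" idea can be illustrated with a toy first-order Markov chain over note names: learn which note tends to follow which in the corpus, then walk those transitions to produce something new in the same style. This is only a teaching sketch of the learn-then-generate loop — FlowMachines itself is built on far richer models — and the function names are ours.

```python
# Toy "style imitation" sketch: learn note-to-note transitions from a
# corpus, then generate a new melody by walking the learned table.
# Not FlowMachines' actual algorithm; illustrative only.

import random
from collections import defaultdict

def learn(melodies):
    """Record which note follows which across a corpus of melodies."""
    table = defaultdict(list)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            table[a].append(b)
    return table

def generate(table, start, length, rng):
    """Walk the transition table to produce a new melody in the same style."""
    melody = [start]
    while len(melody) < length:
        nexts = table.get(melody[-1])
        if not nexts:
            break  # dead end: no observed continuation from this note
        melody.append(rng.choice(nexts))
    return melody

corpus = [["C", "E", "G", "E", "C"], ["C", "E", "G", "C", "E"]]
print(generate(learn(corpus), "C", 6, random.Random(0)))
```

Every transition in the output was observed somewhere in the corpus, which is what makes the result recognisably "in the style of" the inspiration set.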

Creative workflow

The design challenge with this software was to make it adapt to the creative workflow of musicians without becoming a nuisance.

‘The core (problem) was how to do that so that (it) takes into account user constraints. Why? Because if you compose music, actually you never do something from scratch from A to Z,’ said Pachet.

He outlines a typical scenario where the AI software generates something and only parts of it are useful: the musician wants to keep those parts, drop the rest, and generate new material building on the previous partial output. It’s a complex requirement, in other words.

‘Basically, the main contribution of the project was to find ways to do that, to do that well and to do that fast,’ said Pachet. ‘It was really an algorithmic problem.’ As creative workers driven by intuition, musicians need direct results to maintain their momentum. A clunky tool with ambivalent results would not last long in a creative workflow.

Pachet is satisfied that his technical goal is completed and that the AI will generate music ‘quickly and under user constraints’.

After years of development and refinement, the AI music tool now fits on a laptop of the kind found in any recording studio, anywhere. In the hands of music producer Carré, the application became the creative tool that built Hello World.

Computer scientist and AI expert François Pachet created a system that co-writes music. Image credit – Kevin Casey/Horizon

Collaboration

As a record producer, Carré collaborated closely with the artists in the studio to write and produce songs. So, as the resident musical expert, can Carré say if this is a new form of music?

‘It’s not a new form of music,’ he said, ‘It’s a new way to create music.’

Carré said he believes the software could lead to a new era in composition. ‘Every time there is a new tool there is a new kind of compositional style. For this project we can see that there is a new kind of melody that was created.’ He describes this as ‘earworm melodies with strange aspects’.

He also says that the process is a real collaboration between human and machine. The system creates original compositions that are then layered into songs in various forms, whether as a beat, a melody or an orchestration. During the process, artists such as Stromae are actively involved in deciding which of the musical fragments the AI provides to include, and how.

‘You can recognise all the artists because they have made choices that are their identity, I think,’ said Carré.

Pachet concurs. ‘You know in English you say every Lennon needs a McCartney – so that’s the kind of stuff we are aiming at. We are not aiming at autonomous creation. I don’t believe that’s interesting, I don’t believe it’s possible actually, because we have no clue how to give a computer a sense of agency, a sense that something is going somewhere, (that) it has some meaning, a soul, if you want.’

The album’s title, Hello World, reflects the expression commonly used the very first time someone runs a new computer program or starts a website, as proof that it is working. Carré believes that Hello World is just the first step and that the software signals the start of a whole new way of composing.

‘Maybe not next year, but in five years there will be a new set of tools that helps creators to make music,’ said Carré.


More info

FlowMachines

Max Order web comic

IntervalZero’s RTX64

RTX64 turns the Microsoft 64-bit Windows operating system into a real-time operating system (RTOS). RTX64 enhances Windows by adding hard real-time control capabilities to a general-purpose operating system that is familiar to both developers and end users. RTX64 consists of a separate real-time subsystem (RTSS) that schedules and controls all RTSS applications independently of Windows. RTX64 is a key component of the IntervalZero RTOS Platform, which comprises x86 and x64 multicore multiprocessors, Windows, and real-time Ethernet (e.g. EtherCAT or PROFINET), to outperform real-time hardware such as DSPs and radically reduce development costs for systems that require determinism or hard real-time performance.

How to build a robot – the creative way

Here’s a cute video about how UK-based Rusty Squid designs robots. Rusty Squid is a studio for experimental robotic engineering and design, working within the contemporary arts.

David McGoran, Creative Director says “We explore the design space before committing to sensors and autonomous behaviour. During the design process, we created our own bespoke tools to effectively communicate with engineers, artists and designers. One of the bespoke tools featured in How We Build a Robot is called the Story Machine; we use it for, what we call, ‘Relationship Design’.”
