Archive 24.11.2017


We built a robot care assistant for elderly people – here’s how it works

Credit: Trinity College Dublin

By Conor McGinn, Trinity College Dublin

Not all robots will take over human jobs. My colleagues and I have just unveiled a prototype care robot that we hope could take on some of the more mundane work of looking after elderly and disabled people and those with conditions such as dementia. This would leave human carers free to focus on the more personal parts of the job. The robot could also do things humans don’t have time to do now, like keeping a constant check on whether someone is safe and well, while allowing them to keep their privacy.

Our robot, named Stevie, is designed to look a bit (but not too much) like a human, with arms and a head but also wheels. This is because we need it to exist alongside people and perform tasks that may otherwise be done by a human. Giving the robot these features helps people realise that they can speak to it and perhaps ask it to do things for them.

Stevie can perform some of its jobs autonomously, for example reminding users to take medication. Other tasks are designed to involve human interaction. For example, if a room sensor detects a user may have fallen over, a human operator can take control of the robot, use it to investigate the event and contact the emergency services if necessary.

Credit:Trinity College Dublin

Stevie can also help users stay socially connected. For example, the screens in the head can facilitate a Skype call, eliminating the challenges many users face using telephones. Stevie can also regulate room temperatures and light levels, tasks that help to keep the occupant comfortable and reduce possible fall hazards.

None of this will mean we won’t need human carers anymore. Stevie won’t be able to wash or dress people, for example. Instead, we’re trying to develop technology that helps and complements human care. We want to combine human empathy, compassion and decision-making with the efficiency, reliability and continuous operation of robotics.

One day, we might be able to develop care robots that can help with more physical tasks, such as helping users out of bed. But these jobs carry much greater risks to user safety and we’ll need to do a lot more work to make this happen.

Stevie would provide benefits to carers as well as elderly or disabled users. The job of a professional care assistant is incredibly demanding, often involving long, unsocial hours in workplaces that are frequently understaffed. As a result, the industry suffers from extremely low job satisfaction. In the US, more than 35% of care assistants leave their jobs every year. By taking on some of the more routine, mundane work, robots could free carers to spend more time engaging with residents.

Of course, not everyone who is getting older or has a disability may need a robot. And there is already a range of affordable smart technology that can help people by controlling appliances with voice commands or notifying caregivers in the event of a fall or accident.

Credit: Trinity College Dublin

Smarter than smart

But for many people, this type of technology is still extremely limited. For example, how can someone with hearing problems use a conventional smart hub such as the Amazon Echo, a device that communicates exclusively through audio signals? What happens if someone falls and they are unable to press an emergency call button on a wearable device?

Stevie overcomes these problems because it can communicate in multiple ways. It can talk, make gestures, show facial expressions and display text on its screen. In this way, it follows the principles of universal design, because it is designed to adapt to the needs of the greatest possible number of users, not just the able majority.

We hope to have a version of Stevie ready to sell within two years. We still need to refine the design, decide on and develop new features and make sure it complies with major regulations. All this needs to be guided by extensive user testing so we are planning a range of pilots in Ireland, the UK and the US starting in summer 2018. This will help us achieve a major milestone on the road to developing robots that really do make our lives easier.

This article was originally published on The Conversation. Read the original article.

The advantage of four legs

Shortly after SoftBank acquired his company last October, Marc Raibert of Boston Dynamics confessed, “I happen to believe that robotics will be bigger than the Internet.” Many sociologists regard the Internet as the single biggest societal invention since the dawn of the printing press in 1440. To fully understand Raibert’s point of view, one needs to analyze his zoo of robots, which are best known for their awe-inspiring gait, balance and agility. The newest creation to walk out of Boston Dynamics’ lab is SpotMini, the latest evolution of its mechanical canines.

Big Dog, Spot’s unnerving ancestor, first came to public view in 2009 and has racked up quite a YouTube following, with more than 6.5 million views. The technology of Big Dog led to the development of a menagerie of robots, including more dogs, cats, mules, fleas and creatures that have no organic counterparts. Most of the mechanical barn is made up of four-legged beasts, with the exception of its humanoid robot (Atlas) and the wheeled biped robot (Handle). Raibert’s vision of legged robotics spans several decades, going back to his work at MIT’s Leg Lab. In 1992, Raibert spun his lab out of MIT and founded Boston Dynamics. In his words, “Our long-term goal is to make robots that have mobility, dexterity, perception and intelligence comparable to humans and animals, or perhaps exceeding them; this robot [Atlas] is a step along the way.” The creepiness of Raibert’s Big Dog has given way to SpotMini’s more polished look, which incorporates 3D vision sensors on its head. The twenty-four-second teaser video has already garnered nearly 6 million views in the few days since its release and promises viewers hungry for more pet tricks to “stay tuned.”

There are clear stability advantages to quadrupeds over other approaches (bipeds, wheels and treads/track plates) across multiple types of terrain and elevations. At TED last year, Raibert demonstrated how his robo-pups, instead of drones and rovers, could be used for package delivery by easily ascending and descending stairs or other vertical obstacles. By navigating the physical world with an array of perceptive sensors, Boston Dynamics is really creating “data-driven hardware design.” According to Raibert, “one of the cool things of a legged robot is its omnidirectional” movement: “it can go sideways, it can turn in place.” This is useful for a variety of work scenarios, from logistics to warehousing to working in the most dangerous environments, such as the Fukushima nuclear site.

Boston Dynamics is not the only quadruped provider; recent upstarts have entered the market, using Raibert’s research as inspiration for their own bionic creatures. Chinese roboticist Xing Wang is unabashed in his gushing admiration for the founder of Boston Dynamics: “Marc Raibert … is my idol,” he said in a recent interview with IEEE Spectrum magazine. However, his veneration for Raibert has not stopped him from founding a competing startup. Unitree Robotics aims to create quadruped robots that are as affordable as smartphones and drones. While Boston Dynamics has not sold its robots commercially, many have speculated that its current designs would cost hundreds of thousands of dollars. In the spirit of true flattery, Unitree’s first robot is, of course, a quadruped dog named Laikago. Wang aims to sell Laikago for under $30,000 to science museums and eventually as a companion robot. Comparing his product to Raibert’s, Wang said he wanted to “make quadruped robots simpler and smaller, so that they can help ordinary people with things like carrying objects or as companions.” Wang boasts of Laikago’s three degrees of freedom of movement (forward, backward and sideways), its ability to scale rough terrain, and its ability to pass anyone’s kick test.

In addition to omnidirectional benefits, locomotion is a big factor for quadrupedal machines. Professor Marco Hutter at ETH Zürich, Switzerland, is the inventor of ANYmal, an autonomous robot built for the most rugged and challenging environments. Using its proprietary “dynamic running” locomotion, Hutter has deployed the machine successfully in multiple industrial settings, including the rigorous ARGOS Challenge (Autonomous Robot for Gas and Oil Sites). The objective of ARGOS is to develop “a new generation of autonomous robots” for the energy industry, specifically robots capable of performing ‘dirty & dangerous’ inspection tasks, such as “detecting anomalies and intervening in emergency situations.” Unlike a static human frame or a bipedal humanoid, ANYmal is able to perform dynamic maneuvers with its four legs, finding footholds blindly without the need for vision sensors. While wheeled systems literally get stuck in the mud, Hutter’s mechanical beast can work continuously: above ground, underneath the surface, falling, spinning and bouncing upright to perform a mission with precise accuracy. In addition, ANYmal is loaded with a package of sensors that coordinate movements, map point-cloud environments, detect gas leaks, and listen for fissures in pipelines. Hutter explains that oil and gas sites are built for humans, with stairs and varying elevations that are impassable for bipedal or wheeled robots. A quadruped, however, can use its actuators and integrated springs to move with ease within the site through dynamic balance and complex maneuver planning. These high-mobility legged systems can fully rotate their joints, crouch low to the earth and flip in place to create footholds. In many ways they are like large insects creating their own tracks. Hutter says that while biology is a source of inspiration, “we have to see what we can do better and different for robotics” and only then can we “build a machine that is better than nature.”

The idea of improving on nature is not new; Greek mythology is littered with half-man/half-beast demigods. Taking a page from the Greeks, Jiren Parikh imagines a world where nature is fused with machines. Parikh is the Chief Executive of Ghost Robotics, the maker of Minitaur, the newest four-legged creation. Minitaur is smaller than SpotMini, Laikago or ANYmal, as it is specifically designed to be a low-cost, high-performance alternative that can easily scale over or under any surface, regardless of weather, friction or footing. In Parikh’s view, the purpose of legged devices is “to move over unstructured terrains like stairs, ladders, fences, rock fields, ice, in and under water.” Minitaur can actually “feel the environment at a much more granular level and allow for a greater degree of force control for maneuverability.” Parikh explains that quads are inherently more energy efficient, using force actuation and springs to store energy while alternating movements between limbs. Minitaur’s smaller frame leverages this to maneuver more easily around unstructured environments without damaging the assets on the ground. Comparing quad solutions to other mobile methods, Parikh offers an analogy: “while a tank in comparison is the perfect device for unstructured terrain it only works if one doesn’t care about destroying the environment.” Ghost Robotics is very aware of the high value its customers place on their sites, as Parikh plans to distribute its low-cost solution across a number of “industrial, infrastructure, mining and military verticals.” Essentially, Minitaur is “a mobile IoT platform” regardless of the situation on the ground, indoor or outdoor. Speaking with me, Parikh said that in the long term he envisions a world where Ghost Robotics is at the forefront of retail and home use cases, from delivery bots to family pets. He boasts, “You certainly won’t be woken up at 5 AM to go for a walk.”

The topic of autonomous robots will be discussed at the next RobotLabNYC event on November 29th @ 6pm with New York Times best-selling author Dan Burstein / Millennium Technology Value Partners and Rhonda Binda of Venture Smarter, formerly with the Obama Administration.

Announcing the shortlist for Robot Launch 2017

The Robotics Hub, in collaboration with Silicon Valley Robotics, is currently investing in robotics, AI and sensor startups, with checks between $250,000 and $500,000. Current portfolio companies include Agility Robotics, RoBotany, Travelwits and Ariel Precision Technologies.

A team of judges has shortlisted 25 robotics startups, all of which deserve mention. Eight startups will go to a public vote, which starts on December 1 and continues until December 10 on Robohub.org. Eight startups are also currently giving longer pitches to a panel of judges, so that the final winner(s) can be announced at the Silicon Valley Robotics investor showcase on December 14.

The Top 25 in alphabetical order are:

Achille, Inc.
Apellix
Augmented Robots (spin-off from GESTALT Robotics)
Betterment Labs (formerly known as MOTI)
BotsAndUs
C2RO Cloud Robotics
DroidX
Fotokite
Fruitbot, Inc.
Holotron
INF Robotics Inc.
Kinema Systems Inc.
Kiwi Campus
KOMPAÏ robotics
krtkl inc.
Mothership Aeronautics
Northstar Robotics Inc
Rabbit Tractors, Inc
Semio
TatuRobotics PTY LTD
Tennibot
UniExo
Woobo Inc.

The winners of last year’s Robot Launch 2016 startup competition, Vidi Systems, were acquired by Cognex earlier this year for an undisclosed amount. Some of the other finalists have gone on to exhibit at TechCrunch and other competitions. Franklin Robotics raised $312,810 in a Kickstarter campaign, more than doubling its target. Business Insider called Franklin’s Tertill weed whacker ‘a Roomba for your garden’.

Modular Science was accepted into Y Combinator’s Summer 2017 intake, and Dash Robotics, the spin-off from Berkeley’s Biomimetics Lab, makes the Kamigami foldable toy robots that are now being sold at all major retailers.

 

This year, the top 8 startups will receive space in the Silicon Valley Robotics Cowork Space @CircuitLaunch in Oakland. The space has lots of room for testing, a full electronics lab and various prototyping equipment, such as laser cutters, CNC machines and 3D printers. It’s located near Oakland International Airport and is convenient to San Francisco and the rest of Silicon Valley. There are also plenty of meeting and conference rooms. We also hold networking, mentoring and investor events so you can connect with the robotics community.

Finalists also receive invaluable exposure on Robohub.org to an audience of robotics professionals and those interested in the latest robotics technologies, as well as the experience of pitching their startup to an audience of top VCs, investors and experts.

Robot Launch is supported by Silicon Valley Robotics to help more robotics startups present their technology and business models to prominent investors. Silicon Valley Robotics is the not-for-profit industry group supporting innovation and commercialization in robotics technologies. The Robotics Hub is a first investor in advanced robotics and AI startups, helping them get from ‘zero to one’ with its network of robotics and market experts.

Learn more about previous Robot Launch competitions here.

DART: Noise injection for robust imitation learning

Toyota HSR Trained with DART to Make a Bed.

By Michael Laskey, Jonathan Lee, and Ken Goldberg

In Imitation Learning (IL), also known as Learning from Demonstration (LfD), a robot learns a control policy by analyzing demonstrations of the policy performed by an algorithmic or human supervisor. For example, to teach a robot to make a bed, a human would tele-operate the robot through the task to provide examples. The robot then learns a control policy mapping from images/states to actions, which we hope will generalize to states that were not encountered during training.

There are two variants of IL. In Off-Policy learning, or Behavior Cloning, the demonstrations are collected independently of the robot’s policy. However, when the robot encounters novel, risky states, it may not have learned corrective actions. This occurs because of “covariate shift,” a known challenge in which the states encountered during training differ from the states encountered during testing, reducing robustness. Common approaches to reducing covariate shift are On-Policy methods, such as DAgger, in which the robot’s evolving policy is executed and the supervisor provides corrective feedback. However, On-Policy methods can be difficult for human supervisors, potentially dangerous, and computationally expensive.

This post presents a robust Off-Policy algorithm called DART and summarizes how injecting noise into the supervisor’s actions can improve robustness. The injected noise allows the supervisor to provide corrective examples for the type of errors the trained robot is likely to make. However, because the optimized noise is small, it alleviates the difficulties of On-Policy methods. Details on DART are in a paper that will be presented at the 1st Conference on Robot Learning in November.
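To make the idea concrete, here is a minimal, self-contained sketch of DART-style noise injection on a toy 1-D system (our illustration, not the authors’ implementation; the supervisor policy, dynamics and noise level below are purely hypothetical).

% Toy illustration of noise injection during demonstration collection.
% The supervisor's action is perturbed before it is executed, so the
% demonstrations visit the kinds of off-distribution states a trained
% policy is likely to reach, while every state is still labeled with
% the supervisor's intended (unperturbed) action.
T = 50;
sigma = 0.05;                          % injected noise level (optimized in DART)
state = 1.0;                           % toy 1-D state; the supervisor drives it toward 0
states = zeros(T, 1);
actions = zeros(T, 1);
for t = 1:T
    aStar = -0.5 * state;              % supervisor's intended, corrective action
    aNoisy = aStar + sigma * randn();  % perturbed action that is actually executed
    states(t) = state;                 % record the visited state ...
    actions(t) = aStar;                % ... labeled with the intended action
    state = state + aNoisy;            % simple integrator dynamics
end
% (states, actions) would then be used to fit the learner's policy offline.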

We evaluate DART in simulation with an algorithmic supervisor on MuJoCo tasks (Walker, Humanoid, Hopper, Half-Cheetah) and physical experiments with human supervisors training a Toyota HSR robot to perform grasping in clutter, where a robot must search through clutter for a goal object. Finally, we show how DART can be applied in a complex system that leverages both classical robotics and learning techniques to teach the first robot to make a bed. For researchers who want to study and use robust Off-Policy approaches, we additionally announce the release of our codebase on GitHub.


ANDROIDS through the eye of a 19th century wooden camera

Sophia, Hanson Robotics Ltd, Hong Kong 2016 ©Wanda Tuerlinckx

Wanda Tuerlinckx and Erwin R. Boer have fused their scientific and photographic interests in robots and traveled the world since 2016 to visit roboticists to discuss and photograph their creations. The resulting set of photographs documents the technical robot revolution that is unfolding before us. The portfolio of photographs below presents the androids from Wanda’s collection of robot photographs.

But first, here’s a note from Erwin R. Boer, a scientist who connects humans and machines using symbiosis-facilitating techniques modeled on the way humans interact with each other in the here and now.


Man has created machines in the form of mechanical humans since antiquity. The sculpted faces of the early automatons gave us a glimpse of the future we currently live in. Today’s machines look like humans, move like humans, talk like humans, and at a rapidly increasing pace even think like humans. We marvel at the technological capabilities of these robots and how they are being integrated into our daily lives. The integration of robots into society requires vast technological advances. Successful interactions and communications with humans take more than nimble technology and raw artificial intelligence – they require the robot to have emotional intelligence, exhibiting empathy, compassion, forgiveness, and playfulness. At the same time, we watch with some fear as robots approach human potential. Human-like robots come in many incarnations, ranging from humanoids that have human forms but whose bodies and faces are clearly robotic, to androids that look human in every respect and are hard to tell apart from us. Today most androids act on the edge of the uncanny valley, which reflects the fact that the complex behavior of androids can, at times, be highly disturbing to humans; these disturbances arise when the unrealistic expectation of complete human ability that we project onto these highly advanced androids is broken, through interaction, by the sometimes creepy realization that they are not human. This valley is an extremely delicate space, where human and robot apparently overlap in appearance, movement and speech. Researchers are working feverishly to remove the uncanny valley and create a level playing field where robots are capable of producing emotions and become an integral part of society through tranquil, harmonious cooperation, servitude and symbiotic interactions with humans.

Imagine seeing yourself in the mirror, and then that mirror image takes on a reality of its own and walks away to represent you around the world. This is what Professor Hiroshi Ishiguro envisioned when he created his HI-2 and, later, HI-4 geminoids; these geminoids are life-size robotic replicas of himself. He created them to travel for him to faraway conferences so that, from the comfort of his home or office, he could talk and act through them to give lectures and make appearances. A geminoid with its human twin offers a perfect test bed for exploring the question that has inspired scientists and philosophers through the ages, namely: what does it mean to be human? To be human also means to have emotional intelligence, and thus to be able to understand emotions.

Humans understand emotions because seeing an emotion triggers in us the feelings we have when we produce that emotion ourselves, and we therefore naturally project our feelings onto robots that are capable of producing emotions. Dr. David Hanson has produced a facial rubber called frubber that is perfectly suited to being pulled from the inside by little actuators, as if a muscle underneath the skin were contracting. His robots are capable of producing a range of emotions that elicit mirror emotions in us. The child-like android Diego-san has instilled the joys of youth in many of the humans he has interacted with. The emotional riches of Hanson’s androids help to create emotional robots that find tremendous value especially in the medical field, where human compassion is critical for healing and where autistic children benefit from the unfailing compassion that these androids offer.

Recently, the celebrated Japanese author Natsume Sōseki (1867-1916) was reincarnated in the form of an android that will give lectures at the university where Professor Sōseki taught back in the 1880s. The fact that Wanda photographed the Sōseki android with a camera that was used in Sōseki’s own time to take portraits of notable people creates a loop that not only transcends time but also connects two key industrial revolutions: the industrial revolution around 1900 and the robot revolution around 2000. A connection across a similar time scale is also beautifully embodied in Dr. Hanson’s android Einstein, whose clones are currently being used as science teachers in many classrooms and homes around the world. Photography continues to enlighten us through imagery, while robots enlighten us through physical, embodied actions enriched by intelligent, emotionally sensitive speech.


Wanda Tuerlinckx is a photographer who connects humans and robots using a 180-year-old photographic technique. The technique mirrors how humans connect with each other across the boundaries of time, through the soft, understanding eye of our great-grandfathers who lived through earlier technological revolutions, and it presents these new technological marvels with a comfortable familiarity that instills acceptance. The human element in science imposes its presence nowhere more strongly than in the incarnation of a humanoid robot that in many respects is indistinguishable from a real human. More information about Wanda and her work can be found here. You can also see her previous set of robot portraits here.

Geminoid F, Hiroshi Ishiguro Laboratories, Osaka University, Japan 2016 ©Wanda Tuerlinckx
Android Einstein, Hanson Robotics Ltd Hong Kong 2016 ©Wanda Tuerlinckx
Android Einstein, Hanson Robotics Ltd Hong Kong 2016 ©Wanda Tuerlinckx
Soseki Android, Nishogakusha University, Tokyo Japan 2017 ©Wanda Tuerlinckx
Android Hiroshi Ishiguro Laboratories. Osaka University. Japan 2017 ©Wanda Tuerlinckx
F2, Hiroshi Ishiguro Laboratories, Osaka University, Japan 2016 ©Wanda Tuerlinckx
Erica. Hiroshi Ishiguro Laboratories. Osaka University. Japan 2016 ©Wanda Tuerlinckx
Android baby. Babyclon Barcelona Spain 2017 ©Wanda Tuerlinckx
Diego-san, Qualcomm Institute University of California San Diego US 2016 ©Wanda Tuerlinckx
Geminoid HI-4 and Hiroshi Ishiguro Hiroshi Ishiguro Laboratories, Osaka University, Japan 2017. Styling Brian Enrico ©Wanda Tuerlinckx
Han, Hanson Robotics, Hong Kong 2016 ©Wanda Tuerlinckx
Sophia, Hanson Robotics Ltd, Hong Kong 2016 ©Wanda Tuerlinckx

Jibo personal robot tops Time’s Best Innovations of 2017

Credit: Photograph by Sebastian Mader for TIME

Jibo is a personal robot with a difference. Unlike the stationary Amazon Echo or Google Home, it attempts to offer the same repertoire of features while adding its physical presence and mobility to the mix.

Quoting Time Magazine, “Jibo looks like something straight out of a Pixar movie, with a big, round head and a face that uses animated icons to convey emotion. It’s not just that his body swivels and swerves while he speaks, as if he’s talking with his nonexistent hands. It’s not just that he can giggle and dance and turn to face you, wherever you are, as soon as you say, “Hey, Jibo.” It’s that, because of all this, Jibo seems downright human in a way that his predecessors do not. Jibo could fundamentally reshape how we interact with machines.”

Jibo can recognize up to six faces and voices, yet it still has a lot to learn. Although he can help users in basic ways, like summarizing news stories and taking photos, he can’t yet play music or work with third-party apps like Domino’s and Uber.

For this original Indiegogo backer from 2014, it’s been a long wait. Three years! Yes, this version of Jibo still has a lot to learn. But those skills are coming in 2018 as Jibo’s SDK becomes available to developers.

Another Chinese acquisition of a European robotics manufacturer

Huachangda Intelligent Equipment, a Chinese industrial robot integrator primarily servicing China’s auto industry, has acquired Swedish Robot System Products (RSP), a 2003 spin-off from ABB with 70 employees in Sweden, Germany and China, for an undisclosed amount. RSP manufactures grippers, welding equipment, tool changers and other peripheral products for robots.

Last month HTI Cybernetics, a Michigan industrial robotics integrator and contract manufacturer, was acquired by Chongqing Nanshang Investment Group for around $50 million. HTI provides robotic welding systems to the auto industry and also has a contract welding services facility in Mexico.

China is in the midst of a national program to develop or acquire its own technology to rival similar technologies in the West, particularly in futuristic industries such as robotics, electric cars, self-driving vehicles and artificial intelligence. China’s Made in China 2025 program will “support state capital in becoming stronger, doing better, and growing bigger, turning Chinese enterprises into world-class, globally competitive firms,” said President Xi at the recent party congress meeting in Beijing.

Made in China 2025 has specific targets and quotas. It envisions China domestically supplying 3/4 of its own industrial robots and more than 1/3 of its demand for smartphone chips by 2025, for example. These goals are backed with money: $45 billion in low-cost loans, $3 billion for advanced manufacturing efforts and billions more in other types of financial incentives and support.

Over the last two years there have been many targeted acquisitions of EU and US robotics companies by Chinese firms. Following are the major ones:

Bottom line:

The consequences of China’s relentless quest for technology acquisitions may upset global trade. Its efforts have many American and European officials and business leaders pushing for tougher rules on technology purchases. Jeremie Waterman, President of the China Center at the U.S. Chamber of Commerce, said the following to the NY Times:

“If Made in China 2025 achieves its goals, the U.S. and other countries would likely become just commodity exporters to China — selling oil, gas, beef and soybeans.”

Bossa Nova raises $17.5 million for shelf-scanning mobile robots

Bossa Nova Robotics, a Silicon Valley developer of autonomous service robots for the retail industry, announced the close of a $17.5 million Series B funding round led by Paxion Capital Partners, with participation from Intel Capital, WRV Capital, Lucas Venture Group (LVG), and Cota Capital. This round brings Bossa Nova’s total funding to date to $41.7 million.

Bossa Nova helps large-scale stores automate the collection and analysis of on-shelf inventory data by driving its sensor-laden mobile robots autonomously through aisles, navigating safely among customers and store associates. The robots capture images of store shelves and use AI to analyze the data and calculate the status of each product, including location, price and out-of-stocks, which is then aggregated and delivered to management in the form of a restock action plan.

They recently began testing their robots and analytic services in 50 Walmart stores across the US. They first deployed their autonomous robots in retail stores in 2013 and have since registered more than 710 miles and 2,350 hours of autonomous inventory scanning, capturing more than 80 million product images.

“We have worked closely with Bossa Nova to help ensure this technology, which is designed to capture and share in-store data with our associates in near real time, works in our unique store environment,” said John Crecelius, vice president of central operations at Walmart. “This is meant to be a tool that helps our associates quickly identify where they can make the biggest difference for our customers.”

CMU grads launched Bossa Nova Robotics in Pittsburgh as a designer of robotic toys. In 2009 they launched two new products: Penbo, a fuzzy penguin-like robot that sang, danced, cuddled and communicated with her baby in their own Penbo language; and Prime-8, a loud, fast-moving gorilla-like robot for boys. In 2011 and 2012 they changed direction: they sold off the toy business and focused on developing a mobile robot based on CMU’s ballbot technology. Later they converted to normal casters and mobility methods and spent their energies on developing the camera, vision and AI analytics software behind their latest round of shelf-scanning mobile robots.

Efficient data acquisition in MATLAB: Streaming HD video in real-time


The acquisition and processing of a video stream can be very computationally expensive. Typical image processing applications split the work across multiple threads, one acquiring the images, and another one running the actual algorithms. In MATLAB we can get multi-threading by interfacing with other languages, but there is a significant cost associated with exchanging data across the resulting language barrier. In this blog post, we compare different approaches for getting data through MATLAB’s Java interface, and we show how to acquire high-resolution video streams in real-time and with low overhead.

Motivation

For our booth at ICRA 2014, we put together a demo system in MATLAB that used stereo vision for tracking colored bean bags, and a robot arm to pick them up. We used two IP cameras that streamed H.264 video over RTSP. While developing the image processing and robot control parts went as expected, it proved to be a challenge to acquire images from both video streams fast enough to be useful.

Since we did not want to switch to another language, we decided to develop a small library for acquiring video streams. The project was later open sourced as HebiCam.

Technical Background

In order to save bandwidth most IP cameras compress video before sending it over the network. Since the resulting decoding step can be computationally expensive, it is common practice to move the acquisition to a separate thread in order to reduce the load on the main processing thread.

Unfortunately, doing this in MATLAB requires some workarounds due to the language’s single-threaded nature, i.e., background threads need to run in another language. Out of the box, there are two supported interfaces: MEX for calling C/C++ code, and the Java Interface for calling Java code.
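As a quick illustration of the second option (a generic example, not from the original demo), Java classes on the class path can be used directly from MATLAB without any compilation step:

% Construct and use a standard JVM object directly from MATLAB.
list = java.util.ArrayList();   % creates a java.util.ArrayList instance
list.add('frame 1');            % Java methods are called with MATLAB syntax
list.add('frame 2');
count = list.size();            % primitive Java return values come back as MATLAB doubles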

While both interfaces have strengths and weaknesses, practically all use cases can be solved using either one. For this project, we chose the Java interface in order to simplify cross-platform development and the deployment of binaries. The diagram below shows an overview of the resulting system.

stereocam matlab.svg

Figure 1. System overview for a stereo vision setup

Starting background threads and getting the video stream into Java was relatively straightforward. We used the JavaCV library, which is a Java wrapper around OpenCV and FFMpeg that includes pre-compiled native binaries for all major platforms. However, passing the acquired image data from Java into MATLAB turned out to be more challenging.

The Java interface automatically converts between Java and MATLAB types by following a set of rules. This makes it much simpler to develop for than the MEX interface, but it does cause additional overhead when calling Java functions. Most of the time this overhead is negligible. However, for certain types of data, such as large and multi-dimensional matrices, the default rules are very inefficient and can become prohibitively expensive. For example, a 1080x1920x3 MATLAB image matrix gets translated to a byte[1080][1920][3] in Java, which means that there is a separate array object for every single pixel in the image.

As an additional complication, MATLAB stores image data in a different memory layout than most other libraries (e.g. OpenCV’s Mat or Java’s BufferedImage). While pixels are commonly stored in row-major order ([width][height][channels]), MATLAB stores images transposed and in column-major order ([channels][width][height]). For example, if the Red-Green-Blue pixels of a BufferedImage would be laid out as [RGB][RGB][RGB]…​, the same image would be laid out as [RRR…​][GGG…​][BBB…​] in MATLAB. Depending on the resolution this conversion can become fairly expensive.
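As a small worked example of this difference (our illustration, assuming RGB channel order), the following goes from a row-major, interleaved pixel buffer to MATLAB’s native layout using only reshape and permute; the cat/transpose code in approach 3 below does the equivalent channel by channel:

% Tiny worked example of converting an interleaved [RGB][RGB]... buffer
% into MATLAB's h-by-w-by-3 image layout.
h = 2; w = 3;
interleaved = uint8(1:h*w*3);        % stand-in for a row-major interleaved pixel buffer
pix = reshape(interleaved, 3, w, h); % index as [channel, column, row]
img = permute(pix, [3 2 1]);         % reorder into MATLAB's h-by-w-by-3 image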

In order to process images at a frame rate of 30 fps in real-time, the total time budget of the main MATLAB thread is 33ms per cycle. Thus, the acquisition overhead imposed on the main thread needs to be sufficiently low, i.e., a low number of milliseconds, to leave enough time for the actual processing.

Data Translation

We benchmarked five different ways to get image data from Java into MATLAB and compared their respective overhead on the main MATLAB thread. We omitted overhead incurred by background threads because it had no effect on the time budget available for image processing.

The full benchmark code is available here.
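For a rough, self-contained flavor of how such a per-call measurement can be set up with MATLAB’s timeit (this sketch times only the MATLAB-side reshape/typecast used in approach 1 below, on synthetic 1080p data, not the full Java round trip):

% Rough timing sketch using synthetic data (not the full benchmark harness).
h = 1080; w = 1920; c = 3;
data = int8(randi([-128 127], h, w, c));   % stand-in for the int8 data returned from Java
convert = @() reshape(typecast(reshape(data, [], 1), 'uint8'), h, w, c);
fprintf('Conversion overhead: %.2f ms\n', timeit(convert) * 1e3);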

1. Default 3D Array

By default MATLAB image matrices convert to byte[height][width][channels] Java arrays. However, when converting back to MATLAB there are some additional problems:

  • byte gets converted to int8 instead of uint8, resulting in an invalid image matrix

  • changing the type back to uint8 is somewhat messy because the uint8(matrix) cast sets all negative values to zero, and the alternative typecast(matrix, 'uint8') only works on vectors

Thus, converting the data to a valid image matrix still requires several operations.

% (1) Get matrix from byte[height][width][channels]
data = getRawFormat3d(this.javaConverter);
[height,width,channels] = size(data);

% (2) Reshape matrix to vector
vector = reshape(data, width * height * channels, 1);

% (3) Cast int8 data to uint8
vector = typecast(vector, 'uint8');

% (4) Reshape vector back to original shape
image = reshape(vector, height, width, channels);

2. Compressed 1D Array

A common approach to move image data across distributed components (e.g. ROS) is to encode the individual images using MJPEG compression. Doing this within a single process is obviously wasteful, but we included it because it is common practice in many distributed systems. Since MATLAB did not offer a way to decompress jpeg images in memory, we needed to save the compressed data to a file located on a RAM disk.

% (1) Get compressed data from byte[]
data = getJpegData(this.javaConverter);

% (2) Save as jpeg file
fileID = fopen('tmp.jpg','w+');
fwrite(fileID, data, 'int8');
fclose(fileID);

% (3) Read jpeg file
image = imread('tmp.jpg');

3. Java Layout as 1D Pixel Array

Another approach is to copy the pixel array of Java’s BufferedImage and to reshape the memory using MATLAB. This is also the accepted answer for How can I convert a Java Image object to a MATLAB image matrix?.

% (1) Get data from byte[] and cast to correct type
data = getJavaPixelFormat1d(this.javaConverter);
data = typecast(data, 'uint8');
[h,w,c] = size(this.matlabImage); % get dim info

% (2) Reshape matrix for indexing
pixelsData = reshape(data, 3, w, h);

% (3) Transpose and convert from row-major to column-major format (RGB case)
image = cat(3, ...
    transpose(reshape(pixelsData(3, :, :), w, h)), ...
    transpose(reshape(pixelsData(2, :, :), w, h)), ...
    transpose(reshape(pixelsData(1, :, :), w, h)));

4. MATLAB Layout as 1D Pixel Array

The fourth approach also copies a single pixel array, but this time the pixels are already stored in the MATLAB convention.

% (1) Get data from byte[] and cast to correct type
data = getMatlabPixelFormat1d(this.javaConverter);
[h,w,c] = size(this.matlabImage);   % get dim info
vector = typecast(data, 'uint8');

% (2) Interpret pre-laid-out memory as matrix
image = reshape(vector, h, w, c);

Note that the most efficient way we found for converting the memory layout on the Java side was to use OpenCV’s split and transpose functions. The code can be found in MatlabImageConverterBGR and MatlabImageConverterGrayscale.

5. MATLAB Layout as Shared Memory

The fifth approach is the same as the fourth with the difference that the Java translation layer is bypassed entirely by using shared memory via memmapfile. Shared memory is typically used for inter-process communication, but it can also be used within a single process. Running within the same process also simplifies synchronization since MATLAB can access Java locks.

% (1) Lock memory
lock(this.javaObj);

% (2) Force a copy of the data
image = this.memFile.Data.pixels * 1;

% (3) Unlock memory
unlock(this.javaObj);

Note that the code could be interrupted (ctrl+c) at any line, so the locking mechanism would need to be able to recover from bad states, or the unlocking would need to be guaranteed by using a destructor or onCleanup.

The multiplication by one forces a copy of the data. This is necessary because, under the hood, memmapfile only returns a reference to the underlying memory.

Results

All benchmarks were run in MATLAB 2017b on an Intel NUC6I7KYK. The performance was measured using MATLAB’s timeit function. The background color of each cell in the result tables represents a rough classification of the overhead on the main MATLAB thread.

Table 1. Color classification

Color   | Overhead | At 30 FPS
Green   | <10%     | <3.3 ms
Yellow  | <50%     | <16.5 ms
Orange  | <100%    | <33.3 ms
Red     | >100%    | >33.3 ms

The two tables below show the results for converting color (RGB) images as well as grayscale images. All measurements are in milliseconds.


Figure 2. Conversion overhead on the MATLAB thread in [ms]

The results show that the default conversion, as well as jpeg compression, are essentially non-starters for color images. For grayscale images, the default conversion works significantly better due to the fact that the data is stored in a much more efficient 2D array (byte[height][width]), and that there is no need to re-order pixels by color. Unfortunately, we currently don’t have a good explanation for the ~10x cost increase (rather than ~4x) between 1080p and 4K grayscale. The behavior was the same across computers and various different memory settings.

When copying the backing array of a BufferedImage we can see another significant performance increase due to the data being stored in a single contiguous array. At this point much of the overhead comes from re-ordering pixels, so by doing the conversion beforehand, we can get another 2-3x improvement.

Lastly, although accessing shared memory in combination with the locking overhead results in a slightly higher fixed cost, the copying itself is significantly cheaper, resulting in another 2-3x speedup for high-resolution images. Overall, going through shared memory scales very well and would even allow streaming of 4K color images from two cameras simultaneously.

Final Notes

Our main takeaway was that although MATLAB’s Java interface can be inefficient for certain cases, there are simple workarounds that can remove most bottlenecks. The most important rule is to avoid converting to and from large multi-dimensional matrices whenever possible.

Another insight was that shared memory provides a very efficient way to transfer large amounts of data to and from MATLAB. We also found it useful for inter-process communication between multiple MATLAB instances. For example, one instance can track a target while another instance uses its output for real-time control. This is useful for avoiding coupling a fast control loop to the (usually lower) frame rate of a camera or sensor.
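As a hedged sketch of that pattern (the file name, field name and sizes below are purely illustrative), one MATLAB instance can publish a value through a memory-mapped file while another polls it inside its control loop:

% --- Instance A (e.g. the vision/tracking process) ---
fid = fopen('shared_pose.bin', 'w');
fwrite(fid, zeros(1, 6), 'double');        % pre-allocate the mapped region on disk
fclose(fid);
writer = memmapfile('shared_pose.bin', 'Writable', true, ...
                    'Format', {'double', [1 6], 'targetPose'});
writer.Data.targetPose = [0.1 0.2 0.3 0 0 0];   % publish the latest tracked pose

% --- Instance B (e.g. the real-time control process) ---
reader = memmapfile('shared_pose.bin', ...
                    'Format', {'double', [1 6], 'targetPose'});
pose = reader.Data.targetPose;             % read the most recent value each cycle
% Note: unlike the in-process case above, there is no shared lock here, so a
% real system would need some form of synchronization or versioning.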

As for our initial motivation, after creating HebiCam we were able to develop and reliably run the entire demo in MATLAB. The video below shows the setup using old-generation S-Series actuators.

The race to own the autonomous super highway: Digging deeper into Broadcom’s offer to buy Qualcomm

Governor Andrew Cuomo of the State of New York declared last month that New York City will join 13 other states in testing self-driving cars: “Autonomous vehicles have the potential to save time and save lives, and we are proud to be working with GM and Cruise on the future of this exciting new technology.” For General Motors, this represents a major milestone in the development of its Cruise software, since the knowledge gained on Manhattan’s busy streets will be invaluable in accelerating its deep learning technology. In the spirit of one-upmanship, Waymo went one step further by declaring this week that it will be the first company in the world to ferry passengers completely autonomously (without human engineers safeguarding the wheel).

As unmanned systems speed toward consumer adoption, one challenge that Cruise, Waymo and others may encounter within the busy canyons of urban centers is the loss of Global Positioning System (GPS) satellite data. Robots require a complex suite of coordinated data systems, bounced between orbiting satellites, to provide the positioning and communication links they need to accurately navigate our world. The only thing that is certain, as competing technologies and standards wrestle for adoption in this nascent marketplace, is the critical connection between Earth and space. Given the estimated growth of autonomous systems on the road, in the workplace and in the home over the next ten years, most unmanned systems will rely heavily on the ability of commercial space providers to fulfill their boastful mission plans to launch thousands of new satellites into an already crowded low Earth orbit.

As shown by the chart below, the entry of autonomous systems will drive an explosion of data communications between terrestrial machines and space, leading to tens of thousands of new rocket launches over the next two decades. A study by Northern Sky Research (NSR) projects that by 2023 there will be an estimated 5.8 million satellite Machine-to-Machine (M2M) and Internet of Things (IoT) connections, out of approximately 50 billion global Internet-connected devices. In order to meet this demand, satellite providers are racing to the launch pads and raising billions in capital, even before firing up the rockets. As an example, OneWeb, which has raised more than $1.5 billion from SoftBank, Qualcomm and Airbus, plans to launch the first 10 satellites of its constellation in 2018, growing to 650 over the next decade. OneWeb competes with SpaceX, Boeing, Inmarsat, Iridium and others in deploying new satellites offering high-speed communication spectrum, such as Ku band (12 GHz), K band (18 GHz – 27 GHz), Ka band (27 GHz – 40 GHz) and V band (40 GHz – 75 GHz). The opening of new, higher-frequency spectrum is critical to support the explosion of increased data demands. Today there are more than 250 million cars on the road in the United States, and in the future these cars will connect to the Internet, transmitting 200 million lines of code or 50 billion pings of data to safely and reliably transport passengers to their destinations every day.

Satellites already provide millions of GPS coordinates for connected systems. However, the accuracy of GPS can be off by as much as 5 meters, which in a fully autonomous world could mean the difference between life and death. Chip manufacturer Broadcom aims to reduce the error margin to 30 centimeters. According to a press release this summer, Broadcom’s technology works better in concrete canyons like New York, which have plagued Uber drivers for years with wrong fare destinations. Using new L5 satellite signals, the chips are able to calculate receptions between points at a faster rate with lower power consumption (see diagram). Manuel del Castillo of Broadcom explained, “Up to now there haven’t been enough L5 satellites in orbit.” Currently there are approximately 30 L5 satellites in orbit. However, del Castillo suggests that could be enough to begin shipping the new chip next year: “[Even in a city’s] narrow window of sky you can see six or seven, which is pretty good. So now is the right moment to launch.”

David Bruemmer, a leading roboticist and business leader in this space, explained to me this week that GPS is inherently deficient, even with L5 satellite data. In addition, current autonomous systems rely too heavily on vision systems like LIDAR and cameras, which can only see what is in front of them, not around the corner. In Bruemmer’s opinion, the only way to provide the greatest amount of coverage is to combine vision and GPS with point-to-point communications such as ultra-wideband and RF beacons. Bruemmer’s company, Adaptive Motion Group (AMG), is a leading innovator in this space. Ultimately, for AMG to work efficiently with unmanned systems, it requires a communication pipeline that is wide enough to carry space signals within a network of terrestrial high-speed frequencies.

AMG is not the only company focused on utilizing a wide breadth of data points to accurately steer robotic systems. Sandy Lobenstein, Vice President of Toyota Connected Services, explains that the Japanese carmaker has been working with the satellite antenna company Kymeta to expand its data connectivity bandwidth in preparation for Toyota’s autonomous future. “We just announced a consortium with companies such as Intel and a few others to find ways to use edge computing and create standards around managing data flow in and out of vehicles with the cellphone industries or the hardware industries. Working with a company like Kymeta helps us find ways to use their technology to handle larger amounts of data and make use of large amounts of bandwidth that is available through satellite,” said Lobenstein.


In a world of fully autonomous vehicles, the road of the next decade truly will become an information superhighway – with data streams flowing down from thousands of satellites to receiving towers scattered across the horizon, bouncing between radio masts, antennas and cars (Vehicle-to-Vehicle [V2V] and Vehicle-to-Infrastructure [V2X] communications). Last week, Broadcom ratcheted up its autonomous vehicle business by announcing the largest tech deal ever: an offer to acquire Qualcomm for $103 billion. The acquisition would enable Broadcom to dominate both aspects of autonomous communications, which rely heavily on satellite uplinks, GPS and vehicle communications. Broadcom CEO Hock Tan said, “This complementary transaction will position the combined company as a global communications leader with an impressive portfolio of technologies and products.” Days earlier, Tan attended a White House press conference with President Trump, boasting of plans to move Broadcom’s corporate office back to the United States, a very timely move as federal regulators will have to approve the Broadcom/Qualcomm merger.

The merger news comes months after Intel acquired the Israeli computer vision company Mobileye for $15 billion. In addition to Intel, Broadcom also competes with Nvidia, which is leading the charge to enable artificial intelligence on the road. Last month, Nvidia CEO Jensen Huang predicted that “It will take no more than 4 years to have fully autonomous cars on the road. How long it takes for the vast majority of cars on the road to become that, it really just depends.” Nvidia, which traditionally has been a computer graphics chip company, has invested heavily in developing AI chips for automated systems. Huang shares his vision: “There are many tasks in companies that can be automated… the productivity of society will go up.”

Industry consolidation represents the current state of the autonomous car race, as chip makers jockey to own the next generation of wireless communications. Tomorrow’s 5G mobile networks promise a tenfold increase in data streams for phones, cars, drones, industrial robots and smart city infrastructure. Researchers estimate that the number of Internet-connected chips could grow from 12 million to 90 million by the end of this year, making connectivity as ubiquitous as gasoline for connected cars. Karl Ackerman, analyst at Cowen & Co., said it best: “[Broadcom] would basically own the majority of the high-end components in the smart phone market and they would have a very significant influence on 5G standards, which are paramount as you think about autonomous vehicles and connected factories.”

The topic of autonomous transportation and smart cities will be featured at the next RobotLabNYC event on November 29th @ 6pm with New York Times best-selling author Dan Burstein/Millennium Technology Value Partners and Rhonda Binda of Venture Smarter, formerly with the Obama Administration – RSVP today.

Battery safety and fire handling

Lithium battery safety is an important issue as there are more and more reports of fires and explosions. Fires have been reported in everything from cell phones to airplanes to robots.

If you don’t know why we need to discuss this, or even if you do know, watch this clip or click here.

I am not a fire expert. This post is based on things I have heard and some basic research. Contact your local fire department for advice specific to your situation. I had very little success contacting my local fire department about this; hopefully you will have more luck.

Preventing Problems

1. Use a proper charger for your battery type and voltage. This will help prevent overcharging. In many cases lithium-ion batteries catch fire when the chargers keep dumping charge into the batteries after the maximum voltage has been reached.

2. Use a battery management system (BMS) when building battery packs with multiple cells. A BMS will monitor the voltage of each cell and halt charging when any cell reaches the maximum voltage. Cheap BMSs will stop all charging when any cell reaches that maximum voltage. Fancier/better BMSs can individually charge each cell to help keep the battery pack balanced. A balanced pack is good since each cell will be at a similar voltage for optimal battery pack performance. The fancy BMSs can also often detect if a single cell is reading wrong. There have been cases of a BMS working properly but a single cell going bad, which confuses the BMS and yields a fire/explosion.

3. Only charge batteries in designated areas. A designated area should be non-combustible. For example, cement, sand, cinder block and metal boxes are commonly used for charging areas. For smaller cells you can purchase fire containment bags designed to hold the charging battery.

In addition the area where you charge the batteries should have good ventilation.

I have heard that on the Boeing Dreamliner, part of the fix for their batteries catching fire was to make sure that the metal enclosure the batteries sit in could withstand the heat of a battery fire, and also to make sure that in the event of a fire the fumes would vent outside the aircraft and not into the cabin.


Dreamliner battery pack before and after fire. [SOURCE]

4. Avoid short circuiting the batteries. This can cause thermal runaway, which will also cause a fire/explosion. When I say avoid short circuiting the battery, you are probably thinking of just touching the positive and negative leads together. While that is an example, you need to think of other causes as well. For example, puncturing a cell (such as with a drill bit or a screwdriver) or compressing the cells can cause a short circuit with a resulting thermal runaway.

5. Don’t leave batteries unattended when charging. This will let people be available in case of a problem. However, as you saw in the video above, you might want to keep a distance from the battery in case there is a catastrophic event with flames shooting out from the battery pack.

6. Store batteries within the specs of the battery. Usually that means room temperature and out of direct sunlight (to avoid overheating).

7. Train personnel in handling batteries, charging batteries, and what to do in the event of a fire. Having people trained in what to do can be important so that they stay safe. For example, without training people might not realize how bad the fumes are. Also make sure people know where the fire pull stations are and where the extinguishers are.

Handling Fires

1. There are 2 primary issues with a lithium fire. The fire itself and the gases released. This means that even if you think you can safely extinguish the fire, you need to keep in mind the fumes and possibly clear away from the fire.

2a. Lithium batteries, which are usually small non-rechargeable cells (such as in a watch), in theory require a class D fire extinguisher. However, most people do not have one available. As such, for the most part you need to just let the battery burn itself out (it is good that these batteries are usually small). You can use a standard class ABC fire extinguisher to prevent the spread of the fire. Avoid using water on the lithium battery itself, since lithium and water can react violently.

2b. Lithium-ion batteries (including LiFePO4), which are used on many robots, are often larger and rechargeable. These batteries contain very little actual lithium metal, so you can use water or a class ABC fire extinguisher. Do not use a class D extinguisher with these batteries.

With both of these types of fires, there is a good chance that you will not be able to extinguish the fire. If you can safely be in the area, your primary goal is to allow the battery to burn in a controlled and safe manner. If possible, try to get the battery outside and onto a surface that is not combustible. As a reminder, lithium-ion fires are very hot and flames can shoot out from various places unexpectedly; you need to be careful and only do what you can do safely. If you have a battery with multiple cells, it is not uncommon for each cell to catch fire separately: you might see the flames die down, then shortly after another cell catches fire, and then another, as the cells cascade and catch fire.


A quick reminder about how to use a fire extinguisher. Remember first you Pull the pin, then you Aim at the base of the fire, then you Squeeze the handle, followed by Sweeping back and forth at the base of the fire. [SOURCE]

3. In many cases the batteries are in an enclosure, so spraying the robot with an extinguisher will not even reach the batteries. In this case your priority is your safety (from fire and fumes), followed by preventing the fire from spreading. To prevent the fire from spreading you need to make sure all combustible material is away from the robot. If possible, get the battery pack outside.

In firefighting school a common question is: Who is the most important person? To which the response is, me!

4. If the battery is charging, try to unplug the battery charger from the wall. Again, only do this if you can do it safely.


I hope you found the above useful. I am not an expert on lithium batteries or fire safety. Consult with your local experts and fire authorities. I am writing this post due to the lack of information in the robotics community about battery safety.

As Wired put it, “you know what they say: With great power comes great responsibility.”


Thank you Jeff (I think he said I should call him Merlin) for some help with this topic.

Robocar/LIDAR news and video of the Apple car

Robocar news is fast and furious these days. I certainly don’t cover it all, but will point to stories that have some significance. Plus, to tease you, here’s a clip from my 4K video of the new Apple car that you’ll find at the end of this post.

Lidar acquisitions

There are many startups in the lidar space. Recently, Ford’s Argo division purchased Princeton Lightwave, a small lidar firm that was developing 1.5 micron lidars. 1.5 micron lidars include Waymo’s own proprietary unit (the subject of the lawsuit with Uber) as well as those from Luminar and a few others. Most other lidar units work in the 900nm band of near infrared.

Near infrared lasers and optics can be based on silicon, and silicon can be cheap because there is so much experience and capacity devoted to making it. 1.5 micron light is not picked up by silicon, but it’s also not focused by the lens of the eye. That means that you can send out a lot more power and still not hurt the eye, but your detectors are harder to make. That extra power lets you see to 300m, while 900nm band lidars have trouble with black objects beyond 100m.

100m is enough for urban driving, but is not a comfortable range for higher speeds. Radar senses far but has low resolution. Thus the desire for 1.5 micron units.
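
To put rough numbers on that claim, here is a minimal back-of-the-envelope sketch in Python; the 0.5 s system reaction time and 5 m/s² braking deceleration are illustrative assumptions of mine, not figures from any of these companies:

```python
# Back-of-the-envelope: distance covered while the car reacts, plus braking distance.
# reaction_s and decel_ms2 are illustrative assumptions, not vendor figures.

def required_sensing_range(speed_kmh, reaction_s=0.5, decel_ms2=5.0):
    v = speed_kmh / 3.6                        # km/h -> m/s
    reaction_dist = v * reaction_s             # travelled before braking starts
    braking_dist = v ** 2 / (2.0 * decel_ms2)  # distance needed to brake to a stop
    return reaction_dist + braking_dist

for speed_kmh in (50, 100, 130):               # city, highway, fast highway
    print(f"{speed_kmh:>3} km/h -> ~{required_sensing_range(speed_kmh):.0f} m to stop")
```

By about 130 km/h even this optimistic estimate passes the roughly 100m that 900nm units manage on dark targets, which is why the extra reach of 1.5 micron units is attractive.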

GM/Cruise also bought Strobe, a small lidar firm with a very different technology. Their technology is in the 900nm band, but they are working on ways to steer the beam without the moving parts that Velodyne and others use. (Quanergy, in which I have stock, is also developing solid state units, as are several others.) Strobe has not published details, but there is speculation about how its unit works.

What’s interesting is that these players have decided, like Waymo, Uber and others, that they should own their own lidar technology, rather than just buy it from suppliers. This means one of two things:

  • They don’t think anybody out there can supply them with the LIDAR they need — which is what motivated Waymo to build their own, or
  • They think their in-house unit will offer them a competitive advantage

On the surface, neither of these should be true. Suppliers are all working on lidars because most people think they will be needed, and they are developing both 900nm and 1.5 micron units, eager to sell. It’s less clear whether any of these in-house units will be significantly better than the ones the independent suppliers are building, and that is what it would take to get a competitive edge. The unit would need longer range, better resolution, a better field of view or greater reliability than supplier units. It’s not clear why that would be the case, but nobody has released solid specs.

What shouldn’t matter is being able to make it cheaper in-house, especially for those working on taxi service. First, it’s very rare that you can get something cheaper by buying the entire company. Second, it’s not important to make it much cheaper for the first few years of production. Nobody is going to win or lose based on whether their taxi unit costs a few thousand dollars more to make.

So there must be something else that is not revealed driving these acquisitions.

Velodyne, which pioneered the lidar industry for self-driving cars, just announced their new 128 line lidar with twice the planes and 4x the resolution of the giant “KFC Bucket” unit found on most early self-driving car prototypes.

The $75,000 64-laser Velodyne kick-started the industry, but it’s big and expensive. This new one will surely also be expensive but is smaller. In a world where many are working with the 16 and 32 laser units, the main purpose of this unit, I think, will be for those who want to develop with the sensor of the future.

Doing your R&D with high-end gear is often a wise choice. In a few years, the high resolution gear will be cheaper and ready for production, and you want to be ready for that. At the same time, it’s not yet clear how much 128 lines gains over 64. It’s not easy to identify objects in lidar, but you don’t absolutely have to, so most people have not worried too much about it.
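
For a rough sense of what doubling the line count buys geometrically, here is a small sketch; the 27° vertical field of view, evenly spaced beams and 1.7 m pedestrian are simplifying assumptions of mine, since real units concentrate their beams near the horizon:

```python
import math

# How many scan lines land on a pedestrian-sized target at a given range?
def lines_on_target(n_lines, target_height_m, distance_m, vfov_deg=27.0):
    # Angular spacing between adjacent beams, assuming even spacing (a simplification).
    beam_spacing_rad = math.radians(vfov_deg) / (n_lines - 1)
    # Angle subtended by the target at this distance (small-angle approximation).
    target_angle_rad = target_height_m / distance_m
    return target_angle_rad / beam_spacing_rad

for dist in (50, 100, 200):
    l64 = lines_on_target(64, 1.7, dist)
    l128 = lines_on_target(128, 1.7, dist)
    print(f"{dist:>3} m: ~{l64:.1f} lines with 64 beams, ~{l128:.1f} with 128")
```

Roughly doubling the vertical samples on a distant pedestrian may matter for classification, but as noted above, nobody has shown yet how much it helps in practice.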

Pioneer, the Japanese electronics maker, has also developed a new lidar. Instead of trying to steer a laser entirely with solid state techniques, theirs uses MEMS mirrors, similar to those in DLP projectors. This is effectively solid state even though the mirrors actually move. I’ve seen many lidar prototypes that use such mirrors but for some reason they have not gone into production. It is a reasonably mature technology, and can be quite low cost.

More acquisitions and investment

Delphi recently bought Nutonomy, the Singapore/MIT-based self-driving car startup. I remember visiting them a few times in Singapore and finding them not very far along compared to others. Far enough along to fetch $400M, though. Delphi is generally one of the better-thinking tier-one automotive suppliers, and now it can go full-stack with this purchase.

Of course, since most automakers have their own full stack efforts underway, just how many customers will the full-stack tier one suppliers sell to? They may also be betting that some automakers will fail in their projects, and need to come to Delphi, Bosch or others for rescue.

Another big investment is Baidu’s “Project Apollo.” This “moonshot” is going to invest around $1.5B in self-driving ventures and support them with open source tools. They have lots of partners, so it’s something to watch.

Other players push forward

Navya was the first company to sell a self-driving car for consumer use, and now their new vehicle is out. In addition, yesterday in Las Vegas, they started a pilot and within 2 hours had a collision. Talk about bad luck: Navya has been running vehicles for years now without such problems. A truck backed into the Navya vehicle, and it was the truck driver’s fault, but some are criticizing the shuttle because all it did was stop dead when it saw the truck coming. It did not back out of the way, though it could have. Nobody was hurt.

Aurora, the startup created by Chris Urmson after he left Waymo, has shown off its test vehicles. No surprise, they look very much like the designs of Waymo’s early vehicles, a roof rack with a Velodyne 64 laser unit on top. The team at Aurora is top notch, so expect more.

Apple’s cars are finally out and about. Back in September I passed one and took a video of it.

You can see it’s loaded with sensors: no fewer than 12 of the Velodyne 16-laser pucks, and many more to boot. Apple is surely following that philosophy of designing for future hardware.

Robots’ two-pronged role in Alibaba’s $25.3 billion Singles’ Day sale

The Singles’ Day Shopping Festival held each year on November 11th is just like Black Friday, Mother’s Day or any other sales-oriented pseudo-holiday, but bigger and more extravagant. Starting in 2009 in China as a university campus event, Singles’ Day has now spread across China and to more than 180 countries.

After 24 hours of non-stop online marketing, including a star-studded Gala with film star Nicole Kidman and American rapper Pharrell Williams, the day (also known as Bachelors Day or 11/11 because the number “1” is symbolic of an individual that is alone) concluded with a sales total of ¥168 billion ($25.3 billion) on the Tmall and Taobao e-commerce networks (both belong to the Alibaba Group (NASDAQ:BABA)). Other e-commerce platforms including Amazon’s Chinese site Amazon.cn, JD.com, VIP.com and Netease’s shopping site you.163.com also participated in the 11/11 holiday with additional sales.

  • Singles Day sales reported by Alibaba were $5.8 billion in 2013, $9.3 billion in 2014, $14.3 billion in 2015, $17.8 billion in 2016, and $25.3 billion for 2017 (a quick year-over-year growth calculation follows this list).
  • In a story reported by DealStreetAsia, JD.com said that their sales for Singles’ Day – and its 10-day run-up – reached ¥127.1 billion ($19.1 billion), a 50% jump from a year ago. JD started its sales event on Nov. 1st to reduce delivery bottlenecks and to give users more time to make their purchasing decisions.
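
For context, here is a quick calculation of year-over-year growth from those reported Alibaba figures:

```python
# Year-over-year growth of Alibaba's reported Singles' Day sales (USD billions),
# using the figures listed in the bullet above.
sales = {2013: 5.8, 2014: 9.3, 2015: 14.3, 2016: 17.8, 2017: 25.3}

years = sorted(sales)
for prev, cur in zip(years, years[1:]):
    growth = (sales[cur] - sales[prev]) / sales[prev] * 100
    print(f"{cur}: ${sales[cur]:.1f}B ({growth:+.0f}% vs {prev})")
```

Growth slowed to roughly 25% in 2016 and then re-accelerated to roughly 42% in 2017.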

Muyuan Li, a researcher for The Robot Report, said: “Chinese people love shopping on e-commerce websites because sellers offer merchandise 20% – 60% cheaper than in the stores, particularly on 11/11. Sites and consumer items are marketed as a game and people love to play. For example, if you deposit or purchase coupons in advance, you can get a better deal. Customers compare prices on manmanbuy.com or smzdm.com and paste product page urls into xitie.com to see the historical prices for those products. There are lotteries to win Red Envelope “cash” which are really credits that can be applied to your Singles Day shopping carts, and contests to beat other shoppers to the check out.”

Robotics-related products sold in great quantities on Singles Day. ECOVACS and other brands of robot vacuum cleaners were big sellers as were DJI and other camera drones and all sorts of robotic toys and home assistants.

The process:

Although 11/11 was a great day for people buying robotic products, it was also a significant day for the new technologies that handle those products: 1.5 billion Alibaba parcels will traverse China over the next week delivering those purchases to Chinese consumers, and all of those packed and shipped items will have been manufactured, boxed, cased, temporarily stored and then unskidded, unboxed, picked and packed, sorted for shipment and shipped in all manner of ways.

New technology is part of how this phenomenal day is possible: robots, automation, vision systems, navigation systems, transportation systems and 100,000 upgraded smart stores (where people viewed items but then bought online) – all were part of the mechanical underside of this day – and foretell how this is going to play forward. There were also hundreds of thousands of human workers involved in the process.

Material handling:

Here are a few of the robotics-related Chinese warehousing systems vendors that are helping move the massive volume of 11/11’s 1.5 billion packages:

Alibaba and Jack Ma:

Jack Ma is the founder and executive chairman of Alibaba Group and one of Asia’s richest men, with a net worth of $47.8 billion as of November 2017, according to Forbes. His is a rags-to-riches story with an ‘aha’ moment: on a trip to the US, he tried to search for general information about China and found none. So he and his friend created a website with a rudimentary linkage system to other websites. They named their company “China Yellow Pages.”

Quoting from Wikipedia: “In 1999 he founded Alibaba, a China-based business-to-business marketplace site in his apartment with a group of 18 friends. In October 1999 and January 2000, Alibaba twice won a total of a $25 million foreign venture capital investment. Ma wanted to improve the global e-commerce system and from 2003 he founded Taobao Marketplace, Alipay, Ali Mama and Lynx. After the rapid rise of Taobao, eBay offered to purchase the company. Ma rejected their offer, instead garnering support from Yahoo co-founder Jerry Yang with a $1 billion investment. In September 2014 Alibaba became one of the most valuable tech companies in the world after raising $25 billion, the largest initial public offering in US financial history. Ma now serves as executive chairman of Alibaba Group, which is a holding company with nine major subsidiaries: Alibaba.com, Taobao Marketplace, Tmall, eTao, Alibaba Cloud Computing, Juhuasuan, 1688.com, AliExpress.com and Alipay.”

Ma was recently quoted at the Bloomberg Global Business Forum as saying that people should stop looking to manufacturing to drive economic growth. That message ties into his and Alibaba’s overall business plan to be involved in all aspects of the online e-commerce world.

Robohub Podcast #247: ANYmal: A Ruggedized Quadrupedal Robot, with Marco Hutter



In this episode, Audrow Nash interviews Marco Hutter, Assistant Professor for Robotic Systems at ETH Zürich, about ANYmal, a quadrupedal robot designed for autonomous operation in challenging environments. Hutter discusses ANYmal’s design, the ARGOS oil and gas rig inspection challenge, and the advantages and complexities of quadrupedal locomotion.

Here is a video showing some of the highlights of ANYmal at the ARGOS Challenge.


Here is a video that shows some of the motions ANYmal is capable of.


Marco Hutter

Marco Hutter has been Assistant Professor for Robotic Systems at ETH Zürich since 2015 and a Branco Weiss Fellow since 2014. Before this, he was deputy director and group leader in the field of legged robotics at the Autonomous Systems Lab at ETH Zürich. After studying mechanical engineering, he completed his doctoral degree in robotics at ETH with a focus on design, actuation, and control of dynamic legged robotic systems. Besides his commitment to the National Centre of Competence in Research (NCCR) Digital Fabrication since October 2015, Hutter is part of NCCR Robotics and coordinator of several research projects, industrial collaborations, and international competitions (e.g. the ARGOS Challenge) that target the application of highly mobile autonomous vehicles in challenging environments such as search and rescue, industrial inspection, and construction. His research interests lie in the development of novel machines and actuation concepts together with the underlying control, planning, and optimization algorithms for locomotion and manipulation.


Links
