The Robotics Hub, in collaboration with Silicon Valley Robotics, is currently investing in robotics, AI and sensor startups, with checks between $250,000 and $500,000. Current portfolio companies include Agility Robotics, RoBotany, Travelwits and Ariel Precision Technologies.
A team of judges has shortlisted 25 robotics startups, all of which deserve mention. Eight startups will be featured in public voting, which runs from December 1 to December 10 on Robohub.org. Eight startups are also currently giving longer pitches to a panel of judges so that the final winner(s) can be announced at the Silicon Valley Robotics investor showcase on December 14.
The Top 25 in alphabetical order are:
Achille, Inc.
Apellix
Augmented Robots (spin-off from GESTALT Robotics)
Betterment Labs (formerly known as MOTI)
BotsAndUs
C2RO Cloud Robotics
DroidX
Fotokite
Fruitbot, Inc.
Holotron
INF Robotics Inc.
Kinema Systems Inc.
Kiwi Campus
KOMPAÏ robotics
krtkl inc.
Mothership Aeronautics
Northstar Robotics Inc
Rabbit Tractors, Inc
Semio
TatuRobotics PTY LTD
Tennibot
UniExo
Woobo Inc.
The winner of last year’s Robot Launch 2016 startup competition, Vidi Systems, was acquired by Cognex earlier this year for an undisclosed amount. Some of the other finalists have gone on to exhibit at TechCrunch and other competitions. Franklin Robotics raised $312,810 in a Kickstarter campaign, more than doubling its target. Business Insider called Franklin’s Tertill weed whacker ‘a Roomba for your garden’.
Modular Science was accepted into Y Combinator’s Summer 2017 batch, and Dash Robotics, the spin-off from Berkeley’s biomimetics lab, makes the Kamigami foldable toy robots that are now sold at all major retailers.
This year, the top 8 startups will receive space in the Silicon Valley Robotics Cowork Space @CircuitLaunch in Oakland. The space has lots of room for testing, a full electronics lab, and prototyping equipment such as laser cutters, CNC machines, and 3D printers. It’s located near Oakland International Airport and is convenient to San Francisco and the rest of Silicon Valley. There are also plenty of meeting and conference rooms, and we hold networking, mentor, and investor events so you can connect with the robotics community.
Finalists also receive invaluable exposure on Robohub.org to an audience of robotics professionals and those interested in the latest robotics technologies, as well as the experience of pitching their startup to an audience of top VCs, investors and experts.
Robot Launch is supported by Silicon Valley Robotics to help more robotics startups present their technology and business models to prominent investors. Silicon Valley Robotics is the not-for-profit industry group supporting innovation and commercialization in robotics technologies. The Robotics Hub is the first investor in advanced robotics and AI startups, helping them get from ‘zero to one’ with its network of robotics and market experts.
Learn more about previous Robot Launch competitions here.
In Imitation Learning (IL), also known as Learning from Demonstration (LfD), a robot learns a control policy by analyzing demonstrations of the policy performed by an algorithmic or human supervisor. For example, to teach a robot to make a bed, a human would tele-operate the robot to perform the task and provide examples. The robot then learns a control policy, mapping from images/states to actions, which we hope will generalize to states that were not encountered during training.
There are two variants of IL. In Off-Policy IL, or Behavior Cloning, the demonstrations are given independent of the robot’s policy; however, when the robot encounters novel risky states it may not have learned corrective actions. This is caused by “covariate shift,” a known challenge in which the states encountered during training differ from the states encountered during testing, reducing robustness. Common approaches to reduce covariate shift are On-Policy methods, such as DAgger, in which the evolving robot policy is executed and the supervisor provides corrective feedback. However, On-Policy methods can be difficult for human supervisors, potentially dangerous, and computationally expensive.
This post presents a robust Off-Policy algorithm called DART and summarizes how injecting noise into the supervisor’s actions can improve robustness. The injected noise allows the supervisor to provide corrective examples for the type of errors the trained robot is likely to make. However, because the optimized noise is small, it alleviates the difficulties of On-Policy methods. Details on DART are in a paper that will be presented at the 1st Conference on Robot Learning in November.
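For readers who want a concrete picture, here is a minimal MATLAB-style sketch of noise-injected behavior cloning in the spirit of DART, not the authors’ released code. The helpers resetEnv, stepEnv and supervisorPolicy, the horizon T, the number of demonstrations numDemos, and the fixed noise level sigma are all illustrative assumptions; DART itself optimizes the injected noise rather than fixing it.
% Collect demonstrations while injecting Gaussian noise into the supervisor's
% actions, then fit an Off-Policy (behavior cloning) regressor.
sigma = 0.1;                                   % assumed fixed noise level (DART optimizes this)
states = []; actions = [];
for ep = 1:numDemos
    s = resetEnv();                            % hypothetical environment reset
    for t = 1:T
        a = supervisorPolicy(s);               % supervisor's intended action
        aNoisy = a + sigma * randn(size(a));   % execute a perturbed action...
        states(end+1, :) = s(:)';              %#ok<AGROW> ...but label with the intended action
        actions(end+1, :) = a(:)';             %#ok<AGROW>
        s = stepEnv(s, aNoisy);                % hypothetical environment step
    end
end
% One regression model per action dimension (linear here; a neural network would also work).
for d = 1:size(actions, 2)
    policy{d} = fitrlinear(states, actions(:, d));
end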
We evaluate DART in simulation with an algorithmic supervisor on MuJoCo tasks (Walker, Humanoid, Hopper, Half-Cheetah) and physical experiments with human supervisors training a Toyota HSR robot to perform grasping in clutter, where a robot must search through clutter for a goal object. Finally, we show how DART can be applied in a complex system that leverages both classical robotics and learning techniques to teach the first robot to make a bed. For researchers who want to study and use robust Off-Policy approaches, we additionally announce the release of our codebase on GitHub.
Wanda Tuerlinckx and Erwin R. Boer have fused their scientific and photographic interests in robots and traveled the world since 2016 to visit roboticists to discuss and photograph their creations. The resulting set of photographs documents the technical robot revolution that is unfolding before us. The portfolio of photographs below presents the androids from Wanda’s collection of robot photographs.
But first, here’s a note from Erwin R. Boer, a scientist who connects humans and machines using symbiosis-facilitating techniques modeled on the way humans interact with each other in the here and now.
Man has created machines in the form of mechanical humans since antiquity. The sculpted faces of the early automatons gave us a glimpse of the future we currently live in. Today’s machines look like humans, move like humans, talk like humans, and at a rapidly increasing pace even think like humans. We marvel at the technological capabilities of these robots and how they are being integrated into our daily lives. The integration of robots into society requires vast technological advances. Successful interaction and communication with humans takes more than nimble technology and raw artificial intelligence; it requires the robot to have emotional intelligence, exhibiting empathy, compassion, forgiveness, and playfulness. At the same time, we fearfully watch how robots approach human potential. Human-like robots come in many incarnations, ranging from humanoids, whose human forms still have clearly robotic bodies and faces, to androids, which look in all respects like humans and are hard to tell apart from them. Today most androids act on the edge of the uncanny valley: the complex behavior of androids is at times highly disturbing to humans, because we project unrealistic expectations of complete human ability onto these highly advanced machines, and interaction often breaks that projection with the sometimes creepy realization that they are not human. This valley is an extremely delicate space where human and robot apparently overlap in appearance, movement and speech. Researchers are working feverishly to remove the uncanny valley and create a level playing field where robots are capable of producing emotions and become an integral part of society through tranquil, harmonious cooperation, servitude and symbiotic interaction with humans.
Imagine seeing yourself in the mirror, and then that mirror image takes on a reality of its own and walks away to represent you around the world. This is what Professor Hiroshi Ishiguro envisioned when he created his HI-2 and, later, HI-4 geminoids; these geminoids are life-size robotic replicas of himself. He created them to travel for him to faraway conferences so that, from the comfort of his home or office, he could talk and act through them to give lectures and make appearances. A geminoid with its human twin offers a perfect test bed for exploring the question that has inspired scientists and philosophers through the ages, namely: what does it mean to be human? To be human also means to have emotional intelligence and thus to be able to understand emotions.
Humans understand emotions because, when we see an emotion, it triggers in us the feelings we have when we produce that emotion, and therefore we naturally project our feelings onto robots that are capable of producing emotions. Dr. David Hanson has produced a facial rubber called frubber that is perfectly suited to being pulled from the inside by small actuators, as if a muscle underneath the skin were contracting. His robots are capable of producing a range of emotions that elicit mirror emotions in us. The childlike android Diego-san has instilled the joys of youth in many of the humans it has interacted with. The emotional riches of Hanson’s androids help to create emotional robots that find tremendous value especially in the medical field, where human compassion is critical for healing and where autistic children benefit from the unfailing compassion these androids offer.
Recently, the celebrated Japanese author Natsume Sōseki (1867-1916) was reincarnated in the form of an android that will give lectures at the university where professor Sōseki taught back in the 1880s. The fact that Wanda photographed the android Sōseki with a camera that was used in Sōseki’s own time to take portraits of notable people creates a loop that not only transcends time but also connects two key industrial revolutions: the industrial revolution around 1900 and the robot revolution around 2000. The connection across a similar time scale is also beautifully embodied in Dr. Hanson’s android Einstein, whose clones are currently being used as science teachers in many classrooms and homes around the world. Photography continues to enlighten us through imagery, while robots enlighten us through physical embodied actions enriched by intelligent, emotionally sensitive speech.
Wanda Tuerlinckx is a photographer who connects humans and robots using a 180-year-old photographic technique. It mirrors the way humans connect with each other across the boundaries of time, through the soft, understanding eye of our great-grandfathers who lived through earlier technological revolutions, and it presents these new technological marvels with a comfortable familiarity that instills acceptance. The human element in science imposes its presence nowhere more strongly than in the incarnation of a human robot that in many respects is indistinguishable from a human being. More information about Wanda and her work can be found here. You can also see her previous set of robot portraits here.
Jibo is a personal robot with a difference. It is unlike the stationary Amazon Alexa or Google Home. It attempts to offer the same repertoire of features while adding its physical presence and mobility to the mix.
Quoting Time Magazine, “Jibo looks like something straight out of a Pixar movie, with a big, round head and a face that uses animated icons to convey emotion. It’s not just that his body swivels and swerves while he speaks, as if he’s talking with his nonexistent hands. It’s not just that he can giggle and dance and turn to face you, wherever you are, as soon as you say, “Hey, Jibo.” It’s that, because of all this, Jibo seems downright human in a way that his predecessors do not. Jibo could fundamentally reshape how we interact with machines.”
Jibo can recognize up to six faces and voices, yet it still has a lot to learn. Although he can help users in basic ways, like summarizing news stories and taking photos, he can’t yet play music or work with third-party apps like Domino’s and Uber.
As an original Indiegogo backer from 2014, I’ve had a long wait. Three years! Yes, this version of Jibo still has a lot to learn, but those skills are coming in 2018 as Jibo’s SDK becomes available to developers.
Huachangda Intelligent Equipment, a Chinese industrial robot integrator primarily servicing China’s auto industry, has acquired Swedish Robot System Products (RSP), a 2003 spin-off from ABB with 70 employees in Sweden, Germany and China, for an undisclosed amount. RSP manufactures grippers, welding equipment, tool changers and other peripheral products for robots.
Last month, HTI Cybernetics, a Michigan industrial robotics integrator and contract manufacturer, was acquired by Chongqing Nanshang Investment Group for around $50 million. HTI provides robotic welding systems to the auto industry and also has a contract welding services facility in Mexico.
China is in the midst of a national program to develop or acquire its own technology to rival similar technologies in the West, particularly in futuristic industries such as robotics, electric cars, self-driving vehicles and artificial intelligence. China’s Made in China 2025 program will “support state capital in becoming stronger, doing better, and growing bigger, turning Chinese enterprises into world-class, globally competitive firms,” said President Xi at the recent party congress meeting in Beijing.
Made in China 2025 has specific targets and quotas. It envisions China domestically supplying 3/4 of its own industrial robots and more than 1/3 of its demand for smartphone chips by 2025, for example. These goals are backed with money: $45 billion in low-cost loans, $3 billion for advanced manufacturing efforts and billions more in other types of financial incentives and support.
Over the last two years, Chinese companies have made many targeted acquisitions of robotics companies in the EU and US. The major ones follow:
Last year Midea Group acquired the world’s 4th-largest robot manufacturer, Germany-based Kuka AG, for around $4.5 billion.
Baidu acquired Silicon Valley vision systems startup Xperception for an undisclosed amount.
Servotronix, an Israeli motion control and automation systems company, was acquired by Midea Group for $170 million. Servotronix, founded in 1987, develops and manufactures comprehensive and high performance motion control solutions.
RoboRobo, a Korean educational robotics kit and course provider startup, was acquired by Shengtong Printing for $62 million in cash and stock.
Hocoma, a Swiss provider of robotic and sensor-based rehabilitation solutions, merged with Chinese DIH International to provide comprehensive rehab solutions.
Bottom line:
The consequences of China’s relentless quest for technology acquisitions may upset global trade. Their efforts have many American and European officials and business leaders pushing for tougher rules on technology purchases. Jeremie Waterman, president of the China Center at the U.S. Chamber of Commerce, told the NY Times:
“If Made in China 2025 achieves its goals, the U.S. and other countries would likely become just commodity exporters to China — selling oil, gas, beef and soybeans.”
Bossa Nova Robotics, a Silicon Valley developer of autonomous service robots for the retail industry, announced the close of a $17.5 million Series B funding round led by Paxion Capital Partners and participation by Intel Capital, WRV Capital, Lucas Venture Group (LVG), and Cota Capital. This round brings Bossa Nova’s total funding to date to $41.7 million.
Bossa Nova helps large-scale stores automate the collection and analysis of on-shelf inventory data by driving their sensor-laden mobile robots autonomously through aisles, navigating safely among customers and store associates. The robots capture images of store shelves and use AI to analyze the data and calculate the status of each product, including location, price, and out-of-stock items, which is then aggregated and delivered to management in the form of a restock action plan.
They recently began testing their robots and analytic services in 50 Walmart stores across the US. They first deployed their autonomous robots in retail stores in 2013 and have since registered more than 710 miles and 2,350 hours of autonomous inventory scanning, capturing more than 80 million product images.
“We have worked closely with Bossa Nova to help ensure this technology, which is designed to capture and share in-store data with our associates in near real time, works in our unique store environment,” said John Crecelius, vice president of central operations at Walmart. “This is meant to be a tool that helps our associates quickly identify where they can make the biggest difference for our customers.”
CMU grads launched Bossa Nova Robotics in Pittsburgh as a designer of robotic toys. In 2009 they launched two new products: Penbo, a fuzzy penguin-like robot that sang, danced, cuddled and communicated with her baby in their own Penbo language; and Prime-8, a loud, fast-moving gorilla-like robot for boys. In 2011 and 2012 they changed direction: they sold off the toy business and focused on developing a mobile robot based on CMU’s ballbot technology. Later they switched to conventional casters and drive mechanisms and focused their energies on developing the camera, vision and AI analytics software behind their latest generation of shelf-scanning mobile robots.
The acquisition and processing of a video stream can be very computationally expensive. Typical image processing applications split the work across multiple threads, one acquiring the images, and another one running the actual algorithms. In MATLAB we can get multi-threading by interfacing with other languages, but there is a significant cost associated with exchanging data across the resulting language barrier. In this blog post, we compare different approaches for getting data through MATLAB’s Java interface, and we show how to acquire high-resolution video streams in real-time and with low overhead.
Motivation
For our booth at ICRA 2014, we put together a demo system in MATLAB that used stereo vision to track colored bean bags and a robot arm to pick them up. We used two IP cameras that streamed H.264 video over RTSP. While developing the demo, the image processing and robot control parts worked as expected, but it proved to be a challenge to acquire images from both video streams fast enough to be useful.
MATLAB’s built-in functions imread and webread are limited to HTTP and are too slow for real-time use.
Since we did not want to switch to another language, we decided to develop a small library for acquiring video streams. The project was later open sourced as HebiCam.
Technical Background
In order to save bandwidth most IP cameras compress video before sending it over the network. Since the resulting decoding step can be computationally expensive, it is common practice to move the acquisition to a separate thread in order to reduce the load on the main processing thread.
Unfortunately, doing this in MATLAB requires some workarounds due to the language’s single threaded nature, i.e., background threads need to run in another language. Out of the box, there are two supported interfaces: MEX for calling C/C++ code, and the Java Interface for calling Java code.
While both interfaces have strengths and weaknesses, practically all use cases can be solved using either one. For this project, we chose the Java interface in order to simplify cross-platform development and the deployment of binaries. The diagram below shows an overview of the resulting system.
Figure 1. System overview for a stereo vision setup
Starting background threads and getting the video stream into Java was relatively straightforward. We used the JavaCV library, which is a Java wrapper around OpenCV and FFMpeg that includes pre-compiled native binaries for all major platforms. However, passing the acquired image data from Java into MATLAB turned out to be more challenging.
The Java interface automatically converts between Java and MATLAB types by following a set of rules. This makes it much simpler to develop for than the MEX interface, but it does cause additional overhead when calling Java functions. Most of the time this overhead is negligible. However, for certain types of data, such as large and multi-dimensional matrices, the default rules are very inefficient and can become prohibitively expensive. For example, a 1080x1920x3 MATLAB image matrix gets translated to a byte[1080][1920][3] in Java, which means that there is a separate array object for every single pixel in the image.
As an additional complication, MATLAB stores image data in a different memory layout than most other libraries (e.g. OpenCV’s Mat or Java’s BufferedImage). While pixels are commonly stored interleaved in row-major order ([height][width][channels]), MATLAB stores images transposed and in column-major order ([channels][width][height]). For example, if the Red-Green-Blue pixels of a BufferedImage are laid out as [RGB][RGB][RGB]…, the same image would be laid out as [RRR…][GGG…][BBB…] in MATLAB. Depending on the resolution, this conversion can become fairly expensive.
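As a compact illustration of the layout difference (not code from HebiCam), the following assumes rawBytes is a uint8 vector holding a row-major interleaved image and converts it to MATLAB’s convention with a single reshape and permute:
% rawBytes: interleaved pixels, channel varying fastest, then column, then row
interleaved = reshape(rawBytes, channels, width, height);  % [channels x width x height]
matlabImage = permute(interleaved, [3 2 1]);               % -> [height x width x channels]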
In order to process images at a frame rate of 30 fps in real-time, the total time budget of the main MATLAB thread is 33ms per cycle. Thus, the acquisition overhead imposed on the main thread needs to be sufficiently low, i.e., a low number of milliseconds, to leave enough time for the actual processing.
Data Translation
We benchmarked five different ways to get image data from Java into MATLAB and compared their respective overhead on the main MATLAB thread. We omitted overhead incurred by background threads because it had no effect on the time budget available for image processing.
1. Default 3D Array
By default, MATLAB image matrices convert to byte[height][width][channels] Java arrays. However, when converting back to MATLAB there are some additional problems:
byte gets converted to int8 instead of uint8, resulting in an invalid image matrix
changing the type back to uint8 is somewhat messy because the uint8(matrix) cast sets all negative values to zero, and the alternative typecast(matrix, 'uint8') only works on vectors
Thus, converting the data to a valid image matrix still requires several operations.
% (1) Get matrix from byte[height][width][channels]
data = getRawFormat3d(this.javaConverter);
[height,width,channels] = size(data);
% (2) Reshape matrix to vector
vector = reshape(data, width * height * channels, 1);
% (3) Cast int8 data to uint8
vector = typecast(vector, 'uint8');
% (4) Reshape vector back to original shape
image = reshape(vector, height, width, channels);
2. Compressed 1D Array
A common approach to move image data across distributed components (e.g. ROS) is to encode the individual images using MJPEG compression. Doing this within a single process is obviously wasteful, but we included it because it is common practice in many distributed systems. Since MATLAB did not offer a way to decompress jpeg images in memory, we needed to save the compressed data to a file located on a RAM disk.
% (1) Get compressed data from byte[]
data = getJpegData(this.javaConverter);
% (2) Save as jpeg file
fileID = fopen('tmp.jpg','w+');
fwrite(fileID, data, 'int8');
fclose(fileID);
% (3) Read jpeg file
image = imread('tmp.jpg');
3. Java Layout as 1D Pixel Array
The third approach copies the backing array of a BufferedImage, i.e., a single contiguous 1D pixel array stored in the Java convention, and re-orders the pixels in MATLAB.
% (1) Get data from byte[] and cast to correct type
data = getJavaPixelFormat1d(this.javaConverter);
data = typecast(data, 'uint8');
[h,w,c] = size(this.matlabImage); % get dim info
% (2) Reshape matrix for indexing
pixelsData = reshape(data, 3, w, h);
% (3) Transpose and convert from row-major to column-major format (RGB case)
image = cat(3, ...
    transpose(reshape(pixelsData(3, :, :), w, h)), ...
    transpose(reshape(pixelsData(2, :, :), w, h)), ...
    transpose(reshape(pixelsData(1, :, :), w, h)));
4. MATLAB Layout as 1D Pixel Array
The fourth approach also copies a single pixel array, but this time the pixels are already stored in the MATLAB convention.
% (1) Get data from byte[] and cast to correct type
data = getMatlabPixelFormat1d(this.javaConverter);
[h,w,c] = size(this.matlabImage); % get dim info
vector = typecast(data, 'uint8');
% (2) Interpret pre-laid-out memory as matrix
image = reshape(vector, h, w, c);
Note that the most efficient way we found for converting the memory layout on the Java side was to use OpenCV’s split and transpose functions. The code can be found in MatlabImageConverterBGR and MatlabImageConverterGrayscale.
5. MATLAB Layout as Shared Memory
The fifth approach is the same as the fourth with the difference that the Java translation layer is bypassed entirely by using shared memory via memmapfile. Shared memory is typically used for inter-process communication, but it can also be used within a single process. Running within the same process also simplifies synchronization since MATLAB can access Java locks.
% (1) Lock memory
lock(this.javaObj);
% (2) Force a copy of the data
image = this.memFile.Data.pixels * 1;
% (3) Unlock memory
unlock(this.javaObj);
Note that the code could be interrupted (ctrl+c) at any line, so the locking mechanism would need to be able to recover from bad states, or the unlocking would need to be guaranteed by using a destructor or onCleanup.
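One way to guarantee the unlock, sketched here as an illustration rather than HebiCam’s actual implementation, is to tie it to an onCleanup object:
lock(this.javaObj);
unlocker = onCleanup(@() unlock(this.javaObj)); % runs even if the user hits ctrl+c
image = this.memFile.Data.pixels * 1;           % forced copy (see the note below)
clear unlocker;                                 % triggers the unlock immediately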
The multiplication by one forces a copy of the data. This is necessary because, under the hood, memmapfile only returns a reference to the underlying memory.
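For reference, setting up the MATLAB side of such a mapping might look like the following. The file name, image dimensions and the field name 'pixels' are assumptions for illustration; the actual HebiCam setup may differ.
h = 1080; w = 1920; c = 3;                       % assumed image dimensions
this.memFile = memmapfile('shared_image.bin', ...
    'Format', {'uint8', [h w c], 'pixels'}, ...  % matches the layout written by the Java side
    'Repeat', 1, ...
    'Writable', false);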
Results
All benchmarks were run in MATLAB 2017b on an Intel NUC6I7KYK. The performance was measured using MATLAB’s timeit function. The background color of each cell in the result tables represents a rough classification of the overhead on the main MATLAB thread.
Table 1. Color classification

Color  | Overhead | At 30 FPS
Green  | <10%     | <3.3 ms
Yellow | <50%     | <16.5 ms
Orange | <100%    | <33.3 ms
Red    | >100%    | >33.3 ms
The two tables below show the results for converting color (RGB) images as well as grayscale images. All measurements are in milliseconds.
Figure 2. Conversion overhead on the MATLAB thread in [ms]
The results show that the default conversion, as well as jpeg compression, are essentially non-starters for color images. For grayscale images, the default conversion works significantly better due to the fact that the data is stored in a much more efficient 2D array (byte[height][width]), and that there is no need to re-order pixels by color. Unfortunately, we currently don’t have a good explanation for the ~10x cost increase (rather than ~4x) between 1080p and 4K grayscale. The behavior was the same across computers and various different memory settings.
When copying the backing array of a BufferedImage we can see another significant performance increase due to the data being stored in a single contiguous array. At this point much of the overhead comes from re-ordering pixels, so by doing the conversion beforehand, we can get another 2-3x improvement.
Lastly, although accessing shared memory in combination with the locking overhead results in a slightly higher fixed cost, the copying itself is significantly cheaper, resulting in another 2-3x speedup for high-resolution images. Overall, going through shared memory scales very well and would even allow streaming of 4K color images from two cameras simultaneously.
Final Notes
Our main takeaway was that although MATLAB’s Java interface can be inefficient for certain cases, there are simple workarounds that can remove most bottlenecks. The most important rule is to avoid converting to and from large multi-dimensional matrices whenever possible.
Another insight was that shared-memory provides a very efficient way to transfer large amounts of data to and from MATLAB. We also found it useful for inter-process communication between multiple MATLAB instances. For example, one instance can track a target while another instance can use its output for real-time control. This is useful for avoiding coupling a fast control loop to the (usually lower) frame rate of a camera or sensor.
As for our initial motivation, after creating HebiCam we were able to develop and reliably run the entire demo in MATLAB. The video below shows the setup using old-generation S-Series actuators.
Governor Andrew Cuomo of the State of New York declared last month that New York will join 13 other states in testing self-driving cars: “Autonomous vehicles have the potential to save time and save lives, and we are proud to be working with GM and Cruise on the future of this exciting new technology.” For General Motors, this represents a major milestone in the development of its Cruise software, since the knowledge gained on Manhattan’s busy streets will be invaluable in accelerating its deep learning technology. In the spirit of one-upmanship, Waymo went one step further by declaring this week that it will be the first car company in the world to ferry passengers completely autonomously (without human engineers safeguarding the wheel).
As unmanned systems speed ahead toward consumer adoption, one challenge that Cruise, Waymo and others may encounter within the busy canyons of urban centers is the loss of Global Positioning System (GPS) satellite data. Robots require a complex suite of coordinated data systems that bounce between orbiting satellites to provide the positioning and communication links needed to accurately navigate our world. The only thing that is certain, as competing technologies and standards wrestle for adoption in this nascent marketplace, is the critical connection between Earth and space. Given the estimated growth of autonomous systems on the road, in the workplace and in the home over the next ten years, most unmanned systems will rely heavily on the ability of commercial space providers to fulfill their ambitious mission plans to launch thousands of new satellites into an already crowded low Earth orbit.
As shown by the chart below, the entry of autonomous systems will drive an explosion of data communications between terrestrial machines and space, leading to tens of thousands of new rocket launches over the next two decades. A study by Northern Sky Research (NSR) projects that by 2023 there will be an estimated 5.8 million satellite machine-to-machine (M2M) and Internet of Things (IoT) connections among approximately 50 billion global Internet-connected devices. In order to meet this demand, satellite providers are racing to the launch pads and raising billions in capital, even before firing up the rockets. As an example, OneWeb, which has raised more than $1.5 billion from Softbank, Qualcomm and Airbus, plans to launch the first 10 satellites of its constellation in 2018, eventually growing to 650 over the next decade. OneWeb competes with SpaceX, Boeing, Inmarsat, Iridium, and others in deploying new satellites offering high-speed communication spectrum, such as Ku band (12 GHz), K band (18-27 GHz), Ka band (27-40 GHz) and V band (40-75 GHz). The opening of new higher-frequency spectrum is critical to supporting the explosion of data demand. Today there are more than 250 million cars on the road in the United States, and in the future these cars will connect to the Internet, transmitting 200 million lines of code, or 50 billion pings of data, to safely and reliably transport passengers to their destinations every day.
Satellites already provide millions of GPS coordinates for connected systems. However, GPS accuracy has been off by as much as 5 meters, which in a fully autonomous world could mean the difference between life and death. Chip manufacturer Broadcom aims to reduce the error margin to 30 centimeters. According to a press release this summer, Broadcom’s technology works better in concrete canyons like New York, which have plagued Uber drivers for years with wrong fare destinations. Using new L5 satellite signals, the chips are able to calculate positions at a faster rate and with lower power consumption. Manuel del Castillo of Broadcom explained, “Up to now there haven’t been enough L5 satellites in orbit.” Currently there are approximately 30 L5 satellites in orbit. However, del Castillo suggests that could be enough to begin shipping the new chip next year: “[Even in a city’s] narrow window of sky you can see six or seven, which is pretty good. So now is the right moment to launch.”
David Bruemmer, a leading roboticist and business leader in this space, explained to me this week that GPS is inherently deficient, even with L5 satellite data. In addition, current autonomous systems rely too heavily on vision systems like LIDAR and cameras, which can only see what is in front of them, not around the corner. In Bruemmer’s opinion, the only solution that provides the greatest amount of coverage is one that combines vision and GPS with point-to-point communications such as ultra-wideband and RF beacons. Bruemmer’s company, Adaptive Motion Group (AMG), is a leading innovator in this space. Ultimately, for AMG to work efficiently with unmanned systems, it requires a communication pipeline wide enough to transmit space signals within a network of terrestrial high-speed frequencies.
AMG is not the only company focused on utilizing a wide breadth of data points to accurately steer robotic systems. Sandy Lobenstein, Vice President of Toyota Connected Services, explains that the Japanese carmaker has been working with the satellite antenna company Kymeta to expand data connectivity bandwidth in preparation for Toyota’s autonomous future. “We just announced a consortium with companies such as Intel and a few others to find ways to use edge computing and create standards around managing data flow in and out of vehicles with the cellphone industries or the hardware industries. Working with a company like Kymeta helps us find ways to use their technology to handle larger amounts of data and make use of large amounts of bandwidth that is available through satellite,” said Lobenstein.
In a world of fully autonomous vehicles, the road of the next decade truly will become an information superhighway, with data streams flowing down from thousands of satellites to receiving towers littered across the horizon, bouncing between radio masts, antennas and cars (Vehicle to Vehicle [V2V] and Vehicle to Infrastructure [V2X] communications). Last week, Broadcom ratcheted up its autonomous vehicle business by announcing the largest tech deal ever, a $103 billion bid to acquire Qualcomm. The acquisition would enable Broadcom to dominate both aspects of autonomous communications, which rely heavily on satellite uplinks, GPS and vehicle communications. Broadcom CEO Hock Tan said, “This complementary transaction will position the combined company as a global communications leader with an impressive portfolio of technologies and products.” Days earlier, Tan attended a White House press conference with President Trump, boasting of plans to move Broadcom’s corporate office back to the United States, a very timely move as federal regulators will have to approve the Broadcom/Qualcomm merger.
The merger news comes months after Intel acquired the Israeli computer vision company Mobileye for $15 billion. In addition to Intel, Broadcom also competes with Nvidia, which is leading the charge to enable artificial intelligence on the road. Last month, Nvidia CEO Jensen Huang predicted that “It will take no more than 4 years to have fully autonomous cars on the road. How long it takes for the vast majority of cars on the road to become that, it really just depends.” Nvidia, which traditionally has been a computer graphics chip company, has invested heavily in developing AI chips for automated systems. Huang shares his vision: “There are many tasks in companies that can be automated… the productivity of society will go up.”
Industry consolidation represents the current state of the autonomous car race as chip makers volley to own the next generation of wireless communications. Tomorrow’s 5G mobile networks promise a tenfold increase in data streams for phones, cars, drones, industrial robots and smart city infrastructure. Researchers estimate that the number of Internet-connected chips could grow from 12 million to 90 million by the end of this year; making connectivity as ubiquitous as gasoline for connected cars. Karl Ackerman, analyst at Cowen & Co., said it best, “[Broadcom] would basically own the majority of the high-end components in the smart phone market and they would have a very significant influence on 5G standards, which are paramount as you think about autonomous vehicles and connected factories.”
The topic of autonomous transportation and smart cities will be featured at the next RobotLabNYC event series on November 29th @ 6pm with New York Times best selling author Dan Burstein/Millennium Technology Value Partners and Rhonda Binda of Venture Smarter, formerly with the Obama Administration – RSVP today.
Lithium battery safety is an important issue as there are more and more reports of fires and explosions. Fires have been reported in everything from cell phones to airplanes to robots.
If you don’t know why we need to discuss this, or even if you do know, watch this clip or click here.
I am not a fire expert. This post is based on things I have heard and some basic research. Contact your local fire department for advice specific to your situation. I had very little success contacting my local fire department about this; hopefully you will have more luck.
Preventing Problems
1. Use a proper charger for your battery type and voltage. This will help prevent overcharging. In many cases lithium-ion batteries catch fire when the chargers keep dumping charge into the batteries after the maximum voltage has been reached.
2. Use a battery management system (BMS) when building battery packs with multiple cells. A BMS will monitor the voltage of each cell and halt charging when any cell reaches the maximum voltage. Cheap BMSs stop all charging when any cell reaches that maximum voltage. Fancier/better BMSs can charge each cell individually to help keep the battery pack balanced. A balanced pack is good since each cell will be at a similar voltage for optimal battery pack performance. The fancy BMSs can also often detect if a single cell is reading incorrectly. There have been cases where a BMS worked properly but a single cell went bad in a way that confused the BMS, resulting in a fire or explosion.
3. Only charge batteries in designated areas. A designated area should be non-combustible; for example, cement, sand, cinder block and metal boxes are commonly used for charging areas. For smaller cells you can purchase fire containment bags designed to hold the charging battery.
In addition, the area where you charge the batteries should have good ventilation.
I have heard that on the Boeing Dreamliner, part of the fix for their batteries catching fire on planes was to make sure that the metal enclosure holding the batteries could withstand the heat of a battery fire, and that in the event of a fire the fumes would vent outside the aircraft and not into the cabin.
Dreamliner battery pack before and after fire. [SOURCE]
4. Avoid short-circuiting the batteries. A short circuit can cause thermal runaway, which will also cause a fire or explosion. When I say avoid short-circuiting the battery, you are probably thinking of just touching the positive and negative leads together. While that is one example, you need to think of other ways it can happen as well; for example, puncturing a cell (such as with a drill bit or a screwdriver) or compressing the cells can cause a short circuit with resulting thermal runaway.
5. Don’t leave batteries unattended when charging. This ensures someone is available in case of a problem. However, as you saw in the video above, you might want to keep some distance from the battery in case there is a catastrophic event with flames shooting out of the battery pack.
6. Store batteries within the specs of the battery. Usually that means room temperature and out of direct sunlight (to avoid overheating).
7. Train personnel in handling batteries, charging batteries, and what to do in the event of a fire. Having people trained in what to do is important so that they stay safe; for example, without training people might not realize how bad the fumes are. Also make sure people know where the fire pull stations and extinguishers are.
Handling Fires
1. There are two primary hazards with a lithium fire: the fire itself and the gases released. This means that even if you think you can safely extinguish the fire, you need to keep the fumes in mind and possibly move away from the fire.
2a. Lithium batteries, which are usually small non-rechargeable cells (such as in a watch), in theory require a class D fire extinguisher. However, most people do not have one available, so for the most part you need to just let the fire burn itself out (it is good that these batteries are usually small). You can use a standard class ABC fire extinguisher to prevent the spread of the fire. Avoid using water on the lithium battery itself, since lithium and water can react violently.
2b. Lithium-ion batteries (including LiFePO4), which are used on many robots, are often larger and rechargeable. These batteries contain very little actual lithium metal, so you can use water or a class ABC fire extinguisher. Do not use a class D extinguisher on these batteries.
With both of these types of fires, there is a good chance that you will not be able to extinguish it. If you can safely be in the area, your primary goal is to allow the battery to burn in a controlled and safe manner. If possible, try to get the battery outside and onto a surface that is not combustible. As a reminder, lithium-ion fires are very hot and flames can shoot out from various places unexpectedly; be careful and only do what you can do safely. If you have a battery with multiple cells, it is not uncommon for each cell to catch fire separately, so you might see the flames die down and then, shortly after, another cell catch fire, and then another, as the cells cascade.
A quick reminder about how to use a fire extinguisher. Remember first you Pull the pin, then you Aim at the base of the fire, then you Squeeze the handle, followed by Sweeping back and forth at the base of the fire. [SOURCE]
3. In many cases the batteries are in an enclosure, so if you spray the robot with an extinguisher you will not even reach the batteries. In this case your priority is your safety (from fire and fumes), followed by preventing the fire from spreading. To prevent the fire from spreading, make sure all combustible material is away from the robot. If possible, get the battery pack outside.
In firefighting school a common question is: Who is the most important person? To which the response is, me!
4. If the battery is charging, try to unplug the charger from the wall; again, only if you can do this safely.
I hope you found the above useful. I am not an expert on lithium batteries or fire safety. Consult with your local experts and fire authorities. I am writing this post due to the lack of information in the robotics community about battery safety.
Robocar news is fast and furious these days. I certainly don’t cover it all, but will point to stories that have some significance. Plus, to tease you, here’s a clip from my 4K video of the new Apple car that you’ll find at the end of this post.
Lidar acquisitions
There are many startups in the lidar space. Recently, Ford’s Argo division purchased Princeton Lightwave, a small lidar firm that was developing 1.5 micron lidars. 1.5 micron lidars include Waymo’s own proprietary unit (subject of the lawsuit with Uber) as well as those from Luminar and a few others. Most other lidar units work in the 900nm band of near infrared.
Near infrared lasers and optics can be based on silicon, and silicon can be cheap because there is so much experience and capacity devoted to making it. 1.5 micron light is not picked up by silicon, but it’s also not focused by the lens of the eye. That means that you can send out a lot more power and still not hurt the eye, but your detectors are harder to make. That extra power lets you see to 300m, while 900nm band lidars have trouble with black objects beyond 100m.
100m is enough for urban driving, but is not a comfortable range for higher speeds. Radar senses far but has low resolution. Thus the desire for 1.5 micron units.
GM/Cruise also bought Strobe, a small lidar firm with a very different technology. Their technology is in the 900nm band, but they are working on ways to steer the beam without moving mirrors the way Velodyne and others do. (Quanergy, in which I have stock, also is developing solid state units, as are several others.) They have not published but there is speculation on how Strobe’s unit works.
What’s interesting is that these players have decided, like Waymo, Uber and others, that they should own their own lidar technology, rather than just buy it from suppliers. This means one of two things:
They don’t think anybody out there can supply them with the LIDAR they need — which is what motivated Waymo to build their own, or
They think their in-house unit will offer them a competitive advantage
On the surface, neither of these should be true. Suppliers are all working on making lidars because most people think they will be needed. And folks are working on both 900nm and 1.5 micron units, eager to sell. It’s less clear if any of these units will be significantly better than the ones the independent suppliers are building. That’s what is needed to get a competitive edge. The unit needs to be longer range, better resolution, better field of view or more reliable than supplier units. It’s not clear why that will be, but nobody has released solid specs.
What shouldn’t matter is that they can make it cheaper in-house, especially for those working on taxi service. First of all, it’s very rare you can get something cheaper by buying the entire company. Secondly, it’s not important to make it much cheaper for the first few years of production. Nobody is going to win or lose based on whether their taxi unit costs a few thousand more dollars to make.
So there must be something else that is not revealed driving these acquisitions.
Velodyne, which pioneered the lidar industry for self-driving cars, just announced their new 128 line lidar with twice the planes and 4x the resolution of the giant “KFC Bucket” unit found on most early self-driving car prototypes.
The $75,000 64-laser Velodyne kick-started the industry, but it’s big and expensive. This new one will surely also be expensive but is smaller. In a world where many are working with the 16 and 32 laser units, the main purpose of this unit, I think, will be for those who want to develop with the sensor of the future.
Doing your R&D with high-end gear is often a wise choice. In a few years, the high resolution gear will be cheaper and ready for production, and you want to be ready for that. At the same time, it’s not yet clear how much 128 lines gains over 64. It’s not easy to identify objects in lidar, but you don’t absolutely have to so most people have not worried too much about it.
Pioneer, the Japanese electronics maker, has also developed a new lidar. Instead of trying to steer a laser entirely with solid state techniques, theirs uses MEMS mirrors, similar to those in DLP projectors. This is effectively solid state even though the mirrors actually move. I’ve seen many lidar prototypes that use such mirrors but for some reason they have not gone into production. It is a reasonably mature technology, and can be quite low cost.
More acquisitions and investment
Delphi recently bought Nutonomy, the Singapore/MIT-based self-driving car startup. I remember visiting them a few times in Singapore and finding them to be not very far along compared to others. Far enough along to fetch $400M. Delphi is generally one of the better-thinking tier-one automotive suppliers and now it can go full-stack with this purchase.
Of course, since most automakers have their own full stack efforts underway, just how many customers will the full-stack tier one suppliers sell to? They may also be betting that some automakers will fail in their projects, and need to come to Delphi, Bosch or others for rescue.
Another big investment is Baidu’s “Project Apollo.” This “moonshot” is going to invest around $1.5B in self-driving ventures, and support it with open source tools. They have lots of partners, so it’s something to watch.
Other players push forward
Navya was the first company to sell a self-driving car for consumer use. Now their new vehicle is out. In addition, yesterday in Las Vegas, they started a pilot and within 2 hours had a collision. Talk about bad luck — Navya has been running vehicles for years now without such problems. It was a truck that backed into the Navya vehicle, and the truck driver’s fault, but some are faulting it because all it did was stop dead when it saw the truck coming. It did not back out of the way, though it could have. Nobody was hurt.
Aurora, the startup created by Chris Urmson after he left Waymo, has shown off its test vehicles. No surprise, they look very much like the designs of Waymo’s early vehicles, a roof rack with a Velodyne 64 laser unit on top. The team at Aurora is top notch, so expect more.
Apple’s cars are finally out and about. Back in September I passed one and took a video of it.
You can see it’s loaded with sensors. No fewer than 12 of the Velodyne 16 laser pucks and many more to boot. Apple is surely following that philosophy of designing for future hardware.
The Singles’ Day Shopping Festival held each year on November 11th is just like Black Friday, Mother’s Day or any other sales-oriented pseudo-holiday, but bigger and more extravagant. Starting in 2009 in China as a university campus event, Singles’ Day has now spread all over China and to more than 180 countries.
After 24 hours of non-stop online marketing, including a star-studded Gala with film star Nicole Kidman and American rapper Pharrell Williams, the day (also known as Bachelors Day or 11/11 because the number “1” is symbolic of an individual that is alone) concluded with a sales total of ¥168 billion ($25.3 billion) on the Tmall and Taobao e-commerce networks (both belong to the Alibaba Group (NASDAQ:BABA)). Other e-commerce platforms including Amazon’s Chinese site Amazon.cn, JD.com, VIP.com and Netease’s shopping site you.163.com also participated in the 11/11 holiday with additional sales.
Singles Day sales reported by Alibaba were $5.8 billion in 2013, $9.3 billion in 2014, $14.3 billion in 2015, $17.8 billion in 2016, and $25.3 billion for 2017.
In a story reported by DealStreetAsia, JD.com said that their sales for Singles’ Day – and its 10-day run-up – reached ¥127.1 billion ($19.1 billion), a 50% jump from a year ago. JD started its sales event on Nov. 1st to reduce delivery bottlenecks and to give users more time to make their purchasing decisions.
Muyuan Li, a researcher for The Robot Report, said: “Chinese people love shopping on e-commerce websites because sellers offer merchandise 20% – 60% cheaper than in the stores, particularly on 11/11. Sites and consumer items are marketed as a game and people love to play. For example, if you deposit or purchase coupons in advance, you can get a better deal. Customers compare prices on manmanbuy.com or smzdm.com and paste product page urls into xitie.com to see the historical prices for those products. There are lotteries to win Red Envelope “cash” which are really credits that can be applied to your Singles Day shopping carts, and contests to beat other shoppers to the check out.”
Robotics-related products sold in great quantities on Singles Day. ECOVACS and other brands of robot vacuum cleaners were big sellers as were DJI and other camera drones and all sorts of robotic toys and home assistants.
The process:
Although 11/11 was a great day for people buying robotic products, it was also a significant day for the new technologies that handle those products: 1.5 billion Alibaba parcels will traverse China over the next week delivering those purchases to Chinese consumers, and all those packed and shipped items will have been manufactured, boxed, cased, temporarily stored and then unskidded, unboxed, picked and packed, sorted for shipment and shipped in all manner of ways.
New technology is part of how this phenomenal day is possible: robots, automation, vision systems, navigation systems, transportation systems and 100,000 upgraded smart stores (where people viewed items but then bought online) – all were part of the mechanical underside of this day – and foretell how this is going to play forward. There were also hundreds of thousands of human workers involved in the process.
Material handling:
Here are a few of the robotics-related Chinese warehousing systems vendors that are helping move the massive volume of 11/11’s 1.5 billion packages:
Siasun Robot & Automation — Siasun, China’s largest robot manufacturer, posted an article about how their robots are used in logistics centers around China to handle 11/11 and the steadily growing consumer e-commerce volume.
Alibaba and Jack Ma:
Jack Ma is the founder and executive chairman of Alibaba Group and one of Asia’s richest men, with a net worth of $47.8 billion as of November 2017, according to Forbes. His is a rags-to-riches story with an ‘aha’ moment: on a trip to the US, he tried to search for general information about China and found none. So he and a friend created a website with a rudimentary system of links to other websites. They named their company “China Yellow Pages.”
Quoting from Wikipedia: “In 1999 he founded Alibaba, a China-based business-to-business marketplace site in his apartment with a group of 18 friends. In October 1999 and January 2000, Alibaba twice won a total of a $25 million foreign venture capital investment. Ma wanted to improve the global e-commerce system and from 2003 he founded Taobao Marketplace, Alipay, Ali Mama and Lynx. After the rapid rise of Taobao, eBay offered to purchase the company. Ma rejected their offer, instead garnering support from Yahoo co-founder Jerry Yang with a $1 billion investment. In September 2014 Alibaba became one of the most valuable tech companies in the world after raising $25 billion, the largest initial public offering in US financial history. Ma now serves as executive chairman of Alibaba Group, which is a holding company with nine major subsidiaries: Alibaba.com, Taobao Marketplace, Tmall, eTao, Alibaba Cloud Computing, Juhuasuan, 1688.com, AliExpress.com and Alipay.”
Ma was recently quoted at the Bloomberg Global Business Forum as saying that people should stop looking to manufacturing to drive economic growth. That message ties into his and Alibaba’s overall business plan to be involved in all aspects of the online e-commerce world.
In this interview, Audrow Nash interviews Marco Hutter, Assistant Professor for Robotic Systems at ETH Zürich, about a quadrupedal robot designed for autonomous operation in challenging environments, called ANYmal. Hutter discusses ANYmal’s design, the ARGOS oil and gas rig inspection challenge, and the advantages and complexities of quadrupedal locomotion.
Here is a video showing some of the highlights of ANYmal at the ARGOS Challenge.
Here is a video that shows some of the motions ANYmal is capable of.
Marco Hutter
Marco Hutter has been Assistant Professor for Robotic Systems at ETH Zürich since 2015 and a Branco Weiss Fellow since 2014. Before this, he was deputy director and group leader in the field of legged robotics at the Autonomous Systems Lab at ETH Zürich. After studying mechanical engineering, he conducted his doctoral degree in robotics at ETH with a focus on the design, actuation, and control of dynamic legged robotic systems. Besides his commitment to the National Centre of Competence in Research (NCCR) Digital Fabrication since October 2015, Hutter is part of NCCR Robotics and coordinator of several research projects, industrial collaborations, and international competitions (e.g. the ARGOS Challenge) that target the application of highly mobile autonomous vehicles in challenging environments such as search and rescue, industrial inspection, or construction operation. His research interests lie in the development of novel machines and actuation concepts together with the underlying control, planning, and optimization algorithms for locomotion and manipulation.
In a major milestone for robocars, Waymo has announced they will deploy in Phoenix with no human safety drivers behind the wheel. Until now, almost all robocars out there have only gone out on public streets with a trained human driver behind the wheel, ready to take over at any sign of trouble. Waymo and a few others have done short demonstrations with no safety driver, but now an actual pilot, providing service to beta-testing members of the public, will operate without human supervision.
https://youtube.com/watch?v=aaOB-ErYq6Y%3Frel%3D0
This is a big deal, and it indicates Waymo’s internal testing is showing a very strong safety record. The last time they published numbers, they had gone 83,000 miles between “required interventions.” Safety drivers in training are told to intervene at any sign of a problem, and these interventions are then tested in simulation to find out what would have happened if there had been no intervention. If the car would have done the right thing, it’s not a required intervention.
Waymo must have built their number up a great deal from there. People have an accident that is reported to insurance about every 250,000 miles, and to police every 500,000 miles. Injury accidents happen every 1.2M miles, and fatalities every 80M miles. In Waymo’s testing, where they got hit a lot by other drivers, they discovered that there are “dings” about every 100,000 miles that don’t get reported to police or insurance.
People have argued about how good you have to be to put a robocar on the road. You need to be better than all those numbers. I will guess that Waymo has gotten the “ding” number up above 500,000 miles — which is close to a full human lifetime of driving. Since they have only driven 3.5M miles they can’t make real-world estimates of the frequency of injuries and certainly not of fatalities, but they can make predictions. And their numbers have convinced them, and the Alphabet management, that it’s time to deploy.
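A quick back-of-envelope calculation using only the rates quoted above makes the point (the numbers are the ones from this post, not new data):
% Expected number of events over Waymo's ~3.5M test miles at typical human rates
milesDriven = 3.5e6;
ratePerMile = [1/100e3, 1/250e3, 1/500e3, 1/1.2e6, 1/80e6]; % ding, insurance, police, injury, fatality
expectedEvents = milesDriven * ratePerMile
% roughly [35  14  7  2.9  0.04]: enough exposure to estimate "ding" rates,
% nowhere near enough to measure fatality rates directly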
Congratulations to all the team.
They did this not just with real world testing, but building a sophisticated simulator to test zillions of different situations, and a real world test track where they could test 20,000 different scenarios. And for this pilot they are putting it out on the calm and easy streets of Phoenix, probably one of the easiest places to drive in the world. Together, that gives the confidence to put “civilians” in the cars with no human to catch an error. Nothing will be perfect, but this vehicle should outperform a human driver. The open question will be how the courts treat that when the first problem actually does happen. Their test record suggests that may be a while; let us hope it is.
Where do we go from here?
This pilot should give pause to those who have said that robocars are a decade or more away, but it also doesn’t mean they are full here today. Phoenix was chosen because it’s a much easier target than some places. Nice, wide streets in a regular grid. Flat terrain. Long blocks. Easy weather with no snow and little rain. Lower numbers of pedestrians and cyclists. Driving there does not let you drive the next day in Boston.
But neither does it mean it takes you decades to go from Phoenix to Boston, or even to Delhi. As Waymo proves things out in this pilot, first they will prove the safety and other technical issues. Then they will start proving out business models. Once they do that, prepare for a land rush as they leap to other cities to stake the first claim and the first-mover advantage (if there is one, of course.) And expect others to do the same, but later than Waymo, because as this demonstrates, Waymo is seriously far ahead of the other players. It took Waymo 8 years to get to this, with lots of money and probably the best team out there. But it’s always faster to do something the 2nd time. Soon another pilot from another company will arise, and when it proves itself, the land rush will really begin.
The Soft Orthotic Physiotherapy Hand Interactive Aid (SOPHIA) is the culmination of work by Alistair C. McConnell (lead researcher) throughout his PhD, together with the SOPHIA team, and it forms the foundation for our future research into soft robotic rehabilitation systems.
Through Alistair’s research, it became apparent that there was a lack of stroke rehabilitation systems for the hand that could be used in a domestic environment and monitor both physical and neural progress. Alistair conducted a thorough review of the literature to fully explore the state of the art and the apparent lack of this type of rehabilitation system. This review investigated the development of both exoskeleton and end-effector based systems to examine how this point was reached and what gaps and issues remain.
From this review and discussions with physiotherapists, we developed an idea for a brain machine controlled soft robotic system. The “Soft Orthotic Physiotherapy Hand Interactive Aid” (SOPHIA) needed to provide rehabilitation aid in two forms, passive and active:
• Passive rehabilitation, where the subject performs their exercises, and this is reflected in a 3D representation on a screen, and all the data is stored for analysis.
• Active rehabilitation, where the subject attempts to open their hand and if the full extension is not achieved in a designated time, the system provides the extra force needed.
Through a grant from the Newton Fund we developed the SOPHIA system, which consists of a soft robotic exoskeleton with a set of PneuNets actuators providing the force to fully extend the fingers of a hand, and an electropneumatic control system containing the required diaphragm pumps, valves and sensors in a compact modular unit.
The inclusion of a Brain Machine Interface (BMI) allowed us to use motor imagery techniques, where the electroencephalogram signal from the subject could be used as a trigger for the extension motion of the hand, augmenting the active rehabilitation.
We designed the system to accept input from two different BMI devices, and compared a wired, high-end BMI with a low-cost, wireless BMI. By applying machine-learning approaches we were able to narrow down the differences between these two input systems, and our approach enabled the inexpensive system to perform at the same level as the high-end system.
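As an illustration of what such a comparison can look like (a MATLAB sketch under assumed variable names, not the pipeline reported in the paper), one might cross-validate a simple classifier on motor-imagery features extracted from each BMI and compare the resulting accuracies:
% features: trials x (channels * frequency bands), labels: rest vs. intended hand opening
mdl = fitcdiscr(features, labels);      % linear discriminant analysis classifier
cvmdl = crossval(mdl, 'KFold', 10);     % 10-fold cross-validation
accuracy = 1 - kfoldLoss(cvmdl);        % compare this value across the two BMI devices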
You can find further information on the SOPHIA system and the current state of the art in robotic devices and brain-machine interfaces for hand rehabilitation in our recent journal publications.