Archive 24.01.2018


Sewing a mechanical future

SoftWear Automation’s Sewbot. Credit: SoftWear Automation

The Financial Times reported earlier this year that one of the largest clothing manufacturers, Hong Kong-based Crystal Group, proclaimed robotics could not compete with the cost and quality of manual labor. Crystal’s Chief Executive, Andrew Lo, emphatically declared, “The handling of soft materials is really hard for robots.” Lo did leave the door open for future consideration by acknowledging such budding technologies as “interesting.”

One company mentioned by Lo was Georgia Tech spinout SoftWear Automation. SoftWear made news last summer by announcing a contract with an Arkansas apparel factory to outfit 21 production lines with its Sewbot automated sewing machines. The factory is owned by Chinese manufacturer Tianyuan Garments, which produces over 20 million T-shirts a year for Adidas. Tianyuan’s Chairman, Tang Xinhong, boasted about his new investment, saying, “Around the world, even the cheapest labor market can’t compete with us,” once Sewbot brings costs down to $0.33 a shirt.

The challenge for automating cut & sew operations to date has been the handling of textiles, which come in a seemingly infinite number of varieties that stretch, skew, flop and move with great fluidity. To solve this problem, SoftWear uses computer vision to track each individual thread. According to its issued patents, SoftWear developed a specialized camera that captures threads at 1,000 frames per second and tracks their movements using proprietary algorithms. SoftWear embedded this camera around robot end effectors that manipulate the fabrics much as human fingers would. According to a description on IEEE Spectrum, these “micromanipulators, powered by precise linear actuators, can guide a piece of cloth through a sewing machine with submillimeter precision, correcting for distortions of the material.” To further ensure the highest level of quality, Sewbot uses a four-axis robotic arm with a vacuum gripper that picks and places the textiles on a sewing table with a 360-degree conveyor system and spherical rollers to quickly move the fabric panels around.
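In control terms, the patents describe a high-rate vision measurement closing a feedback loop around the fabric feed. As a rough illustration of that idea, here is a minimal Python sketch of a proportional correction loop, assuming a hypothetical 1,000 Hz seam-offset measurement; the function names and the drift model are invented, and this is not SoftWear's actual algorithm.

```python
# Minimal sketch of a vision-driven feed-correction loop, loosely modeled on the
# idea described above. All names (measure_seam_offset, apply_correction) and the
# fabric-drift model are hypothetical illustrations, not SoftWear's implementation.
import random

CAMERA_HZ = 1000          # assumed frame rate of the thread-tracking camera
KP = 0.4                  # proportional gain for the micromanipulator correction
TARGET_OFFSET_MM = 0.0    # we want the seam centered on the needle line

def measure_seam_offset(true_offset_mm):
    """Pretend camera measurement: the true offset plus a little sensor noise."""
    return true_offset_mm + random.gauss(0.0, 0.02)

def apply_correction(measured_offset_mm):
    """Proportional correction command (mm) sent to the fabric micromanipulators."""
    return -KP * (measured_offset_mm - TARGET_OFFSET_MM)

def run(frames=2000):
    offset = 1.5  # fabric starts 1.5 mm off the seam line
    for frame in range(frames):
        measured = measure_seam_offset(offset)
        offset += apply_correction(measured)   # actuator nudges the panel
        offset += random.gauss(0.0, 0.01)      # fabric stretch/skew disturbance
        if frame % 500 == 0:
            print(f"t={frame / CAMERA_HZ:.2f}s  offset={offset:+.3f} mm")

if __name__ == "__main__":
    run()
```

Even this toy loop shows why the frame rate matters: at 1,000 corrections per second, each correction only has to cancel a tiny amount of drift before the next stitch.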

SoftWear’s CEO, Palaniswamy “Raj” Rajan, explained, “Our vision is that we should be able to manufacture clothing anywhere in the world and not rely on cheap labor and outsourcing.” Rajan appears to be working hard towards that goal, professing that his robots are already capable of producing more than 2 million products sold at Target and Walmart. According to IEEE Spectrum, Rajan further asserted in a press release that by the end of 2017 Sewbot would be on track to produce “30 million pieces a year.” It is unclear if that objective was ever met. SoftWear did announce the closing of a $7.5 million financing round by CTW Venture Partners, a firm of which Rajan is also the managing partner.

SoftWear Automation is not the only company focused on automating the trillion-dollar apparel industry. Sewbo has been turning heads with its innovative approach to fabric manipulation. Unlike SoftWear, which is taking the more arduous route of revolutionizing the machines themselves, Sewbo stiffens textiles into a hardened state that is easy for off-the-shelf robots and existing sewing equipment to handle. Sewbo’s secret sauce, literally, is a water-soluble thermoplastic stiffening solution that turns cloth into a cardboard-like material. Blown away by its creativity and simplicity, I sat down with its inventor Jonathan Zornow last week to learn more about the future of fashion automation.

After researching the best polymers to safely laminate onto fabrics using a patent-pending technique, Zornow explained that last year he was able to unveil the “world’s first and only robotically-sewn garment.” Since then, Zornow has been contacted by almost every apparel manufacturer (excluding Crystal) to explore automating their production lines. Zornow has hit a nerve with an industry, especially in Asia, that finds itself in the labor-management business, with monthly attrition rates of 10% and huge drop-offs after Chinese New Year. Zornow shared that “many factory owners were in fact annoyed that they couldn’t buy the product today.”

Zornow believes that automation technologies could initially be a boon for bringing small-batch production back to the USA, pushing an industry of “mass customization” closer to the consumer. As reported in 2015, apparel brands have been moving manufacturing back from China with 3D-printing technologies for shoes and knit fabrics. Long-term, Zornow said, “I think that automation will be an important tool for the burgeoning reshoring movement by helping domestic factories compete with offshore factories’ lower labor costs. When automation becomes a competitive alternative, a big part of its appeal will be how many headaches it relieves for the industry.”

To date, fulfilling the promise of “Made In America” has proven difficult; as Zornow explained, we have forgotten how to make things here. According to a recent report by the American Apparel & Footwear Association, the share of clothing made in the US fell from 50% in 1994 to roughly 3% in 2015, meaning 97% of clothing sold today is imported. For example, Zornow shared with me how his competitor was born. In 2002, the US Congress passed the Berry Amendment requiring the armed services to source uniforms domestically, which led DARPA to grant $1.75 million to a Georgia Tech team to build a prototype of an automated sewing machine. As Rajan explains, “The Berry Amendment went into effect restricting the military from procuring clothing that was not made in the USA. Complying with the rule proved challenging due to a lack of skilled labour available in the US that only got worse as the current generation of seamstresses retired with no new talent to take their place. It was under these circumstances that the initial idea for Softwear was born and the company was launched in 2012.”

I first met Zornow at RoboBusiness last September when he demonstrated for a packed crowd how Sewbo is able to efficiently recreate the 10-20 steps needed to sew a T-shirt. However, producing a typical men’s dress shirt can require up to 80 different steps. Zornow pragmatically explains the road ahead: “It will be a very long time, if ever, before things are 100% automated.” He points to examples of current automation in fabric production, such as dyeing, cutting and finishing, which augment manual labor. Following this trend, “they’re able to leverage machines to achieve incredible productivity, to the point where the labor cost to manufacture a yard of fabric is usually de minimis.” Zornow foresees a future where his technology is just another step in the production line, as forward-thinking factories are planning two decades ahead, recognizing that in order “to stay competitive they need new technologies” like Sewbo.

Best practices in designing effective roadmaps for robotics innovation

In the past decade, countries and regions around the globe have developed strategic roadmaps to guide investment and development of robotic technology. Roadmaps from the US, South Korea, Japan and EU have been in place for some years and have had time to mature and evolve. Meanwhile roadmaps from other countries such as Australia and Singapore are just now being developed and launched. How did these strategic initiatives come to be? What do they hope to achieve? Have they been successful, and how do you measure success?

To explore these issues, former Robohub Editor Hallie Siegel and Open Roboethics Institute (ORi) Co-founder and Director AJung Moon invited researchers, policymakers, and industry members who have played a significant role in launching and shaping major strategic robotics initiatives in their regions to participate in an IROS 2017 workshop “Best practices in designing effective roadmaps for robotics innovation” to see what is working, what is not, and to uncover what best practices in roadmap development (if any) might be broadly applied to other regions.

Specifically, the workshop sought to examine the process of how these policy frameworks came to be created in the first place, how they have been tailored to local capabilities and strengths, and what performance indicators are being used to measure their success — so that participants could draw from international collective experience as they design and evaluate strategic robotics initiatives for their own regions.
The highlight of the workshop was a pair of panel discussions moderated by robotics ecosystem expert Andra Keay, Managing Director of the robotics industry cluster association Silicon Valley Robotics.

The first panel — featuring Peter Corke (QUT, Australia), Kyung-Hoon Kim (Ministry of Trade, Industry & Energy (MOTIE), South Korea), Rainer Bischoff (euRobotics AISBL, EU), Dario Floreano (NCCR Robotics, EPFL, Switzerland), Sue Keay (Queensland University of Technology, Australia), and Raj Madhavan (Humanitarian Robotics Technologies, LLC, USA) — was an in-depth discussion of regional differences in roadmap motivation, leadership, and strategy design from the perspectives of government, industry and academia.

The second panel — featuring Raja Chatila (IEEE Global Initiative for Ethical Considerations in Artificial Intelligence & Autonomous Systems), AJung Moon (Open Roboethics Institute), Alex Shikany (Robotic Industries Association), and Sabine Hauert (Robohub.org) — covered the ways in which issues such as roboethics, public perception and the fear of job loss due to automation are influencing robotics policy in different regions around the globe.

Lenka Pitonakova was part of the audience and provides her impressions below.

State-of-the-art in different countries

The workshop started with presentations on the process of designing roadmaps for robotics research and development. Individual speakers discussed best practices and problems encountered in their respective countries.

Dr. Rainer Bischoff (euRobotics, EU)

Robotics research is at a very mature stage in the EU. A network of robotics researchers and companies cooperates closely on issues such as using robots in healthcare and logistics, as well as for maintenance and inspection of infrastructure. The partnership between the European Commission and European industry and academia is called SPARC. Decisions about which research areas to fund in the European Commission’s H2020 programme are made in a bottom-up fashion, meaning stakeholders get to influence priority areas for funding. This is done through topic groups in robotics established throughout the EU, which help shape the Robotics Multi-Annual Roadmap, which in turn shapes the work programme of the European Commission. Public outreach is also very important in the EU: not only are all funding decisions openly available to the public, but researchers are also encouraged to perform outreach activities.

Dr. Dario Floreano (NCCR Robotics and EPFL, Switzerland)

One main source of funding in Switzerland currently comes from the NCCR scheme, a 12-year programme that started in 2010 with four target areas: research, technology transfer (especially when it comes to creating start-ups), education and outreach. Part of the programme is also structural change in Switzerland’s research institutions: since 2010, new robotics centres have been created and many new professors have been appointed. Switzerland takes a very proactive approach to applied research, and technology transfer is as important as the research itself. The most important areas of interest include wearable robots, mobile robots for search and rescue, human-robot interaction and teleoperation, as well as educational robots that can teach computational thinking.

Sue Keay (Australian Centre for Robotic Vision, Australia)

Australia is currently behind other Western countries when it comes to automation, mostly because there is no overarching body that could unify different research groups and provide substantial research funding. A plan for a centralised institution to support research is currently being formed, and efforts are underway to persuade the government of the importance of investing in robotics. To this end, the ICRA 2018 conference, which will take place in Brisbane, Australia, is an important event for the Australian robotics community. Among the focus areas for future research, mining, manufacturing, defence and healthcare have been identified as the most important.

Dr. Kyung-Hoon Kim (Ministry of Trade, Industry & Energy, South Korea)

The government of South Korea has controlled the robotics research focus via the Intelligent Robots Development and Promotion Act since 2008. Every five years, the roadmap for research is re-visited by government experts and new funding areas are identified. The areas of interest include machine learning and big data analysis, human-robot collaboration, development of hardware parts and software platforms, as well as application of robotics in smart factories.

Dr. Raj Madhavan (Humanitarian Robotics Technologies, USA and India)

Researchers in India currently do not get much support from the government when it comes to robotics research, although some work on a research roadmap for robotics started in early 2017. However, the lack of government support is not the only problem that India faces. There also seems to be a lack of interest and commitment from individual researchers to unite their efforts and collaborate on a national level.

Research roadmaps and funding

In a panel discussion that followed the presentations, the following stakeholders in research and innovation were identified:

  • Governments: Provide investment money and shape regulations
  • Academia: Provides the foresight into new technologies
  • Industry: Creates the need for research and applies it.

A crucial factor that influences interest from government and industry was identified: the ability of researchers to provide estimates of the economic impact of their work. Especially in the EU, robotics started growing as a research field when industry became more involved in funding and roadmap creation.

Secondly, it was mentioned that engaging the government and the public is also very important. Because government regulations can stop a technology from being developed and used, politicians should be engaged with new ideas and with how they will shape society. On the other hand, end-users, i.e., the public, also need to understand the impact of new technology on their lives, both to encourage the use of new technology and to mitigate fears of its negative impacts. However, engaging all stakeholders and making research and development relevant to all of them is often very difficult because of differences in opinions and long-term goals.

Challenges for adoption of robotics technologies


The second panel discussion focused on challenges for adoption of robotic technologies.

Public acceptance and uncertainty about the impact of technology on the well-being of people and society as a whole, as well as the fear of losing control of autonomous systems, were identified as the most important topics to address. To mitigate these fears, it is useful to provide the public with statistics on how technology has affected jobs in the recent past, as well as well-informed projections for the near future.

It is crucial for scientists to be trained in and to apply best practices in public communication, especially as commentary on new technology often comes from the media, where non-experts (consciously or not) misinform the public about the impacts and capabilities of new technology.

CES 2018: Robots, AI, massive data and prodigious plans

This year’s CES was a great show for robots. “From the latest in self-driving vehicles, smart cities, AI, sports tech, robotics, health and fitness tech and more, the innovation at CES 2018 will further global business and spur new jobs and new markets around the world,” said Gary Shapiro, president and CEO, CTA.

But with that breadth of coverage and an estimated 200,000 visitors, 7,000 media, 3,000 exhibitors, 900 startups, 2.75 million sq ft of floorspace at two convention centers, hospitality suites in almost every major hotel on the Las Vegas Strip, over 20,000 product announcements and 900 speakers in 200 conference sessions come massive traffic (humans, cars, taxis and buses), power outages, product launch snafus and humor.

AI, big data, Amazon, Google, Alibaba and Baidu

“It’s the year of A.I. and conversational interfaces,” said J. P. Gownder, an analyst for Forrester Research, “particularly advancing those interfaces from basic conversations to relationships.” Voice control of almost everything from robots to refrigerators was de rigueur. The growing amount of artificial intelligence software and the race between Amazon, Google and their Chinese counterparts Alibaba and Baidu to be the go-to service for integration was on full display. Signs advertised that products worked with Google Assistant or Amazon’s Alexa or both, or with Duer-OS (Baidu’s conversational operating system) but, by the sheer number of products that worked with the Alexa voice assistant, Amazon appeared to dominate.

“It’s the year when data is no longer static and post-processed,” said Brian Krzanich, Intel’s CEO. He also said: “The rise of autonomous cars will be the most ambitious data project of our lifetime.” In his keynote presentation he demonstrated the massive volume of data involved in the real-time processing needs of autonomous cars, sports events, smart robotics, mapping data collection and a myriad of other data-driven technologies on the horizon.

Many companies were promoting their graphics, gaming, and other types of processors. IBM had a large invite-only room to show their quantum computer; its 50-qubit chip is housed in a silver canister at the bottom of the machine, and not shown is the housing that keeps the device at cryogenic temperatures. IBM is making the computer available via the cloud to 60,000 users working on 1.7 million experiments, as well as to commercial partners in finance, materials, automotive and chemistry. (Intel showed their 49-qubit chip, code-named Tangle Lake, in a segment of Krzanich’s keynote.)

Robots, robotics and startups

Robots were everywhere, ranging from ag bots, tennis bots, robot arms, robot prosthetics and robot wheelchairs to smart home companions, security robots and air, land and sea drones. In this short promotional video produced by CES, one can see the range of products on display. Note the quantity of Japanese, Chinese and Korean manufacturers.

One of the remarkable features of CES is what they call Eureka Park. It’s a whole floor of over 900 startup booths with entrepreneurs eager to explain their wares and plans. The area was supported by the NSF, Techstars and a host of others. It’s a bit overwhelming but absolutely fascinating.

Because it’s such a spread-out show, locations tend to blur. But I visited all the robotics and related vendors I could find in my 28,000-step two-day exploration and the following are ones that stuck out from the pack:

  • LiDAR and camera vision system providers for self-driving vehicles, robots, and automation, such as Velodyne, Quanergy and Luminar, were everywhere, showing near and far detection ranges; wide, narrow and 360° fields of view; solid state as well as more conventional scanning designs; and software to consolidate all that data and make it meaningful.
    • Innoviz Technologies, an Israeli startup that has already raised $82 million, showed their solid state device (Pro) available now and their low-cost automotive grade product (One) available in 2019.
    • Bosch-supported Chinese startup Roadstar.ai is developing Level 4 multi-sensor fusion solutions (cameras, LiDARs, radars, GPS and others).
    • Beijing Visum Technology, another Chinese vision system startup, but this one uses what they called ‘Natural Learning’ to continually improve what is seen by their ViEye, stereoscopic, real-time vision detection system for logistics and industrial automation.
    • Korean startup EyeDea displayed both a robot vision system and a smart vision module (camera and chip) for deep learning and obstacle avoidance for the auto industry.
    • Occipital, a Colorado startup, is developing depth-sensing tech using twin infrared shutter cameras for indoor and outdoor scanning and tracking.
    • Aeolus Robotics, a Chinese/American startup working on a $10,000 home robot with functional arms and hands, and comms and interactivity similar to what IBM’s Watson offers, appeared to be focused on selling their system components: object recognition, facial/pedestrian recognition, deep learning perception systems and auto safety cameras which interpret human expressions and actions such as fatigue.
  • SuitX, a Bay Area exoskeleton spin-off from Ekso Bionics, focused on providing modular therapeutic help for people with limited mobility rather than industrial uses of assistive devices. Ekso, which wasn’t at CES, is providing assistive systems that people strap into to make walking, lifting and stretching easier for employees and the military.
  • There were a few marine robots for photography, research and hull inspection:
    • Sublue Underwater AI, a Chinese startup, had a tank with an intelligent ROV capable of diving down to 240′ while sending back full HD camera and sensor data. They also make water tows.
    • RoboSea, also a Chinese startup, was showing land and sea drones for entertainment, research and rescue and photography.
    • QYSea, a Chinese startup making a 4k HD underwater camera robot which can go to a depth of 325′.
    • CCROV, a brand of the Vxfly incubator of China’s Northwestern Polytechnical University, demonstrated a 10-pound tethered camera box with thrusters that can dive more than 300′. The compact device is designed for narrow underwater areas and environments too dangerous for people.
    • All were well-designed and packaged as consumer products.
  • UBTech, a Chinese startup that is doing quite well making small humanoid toy robots, including their $300 Star Wars Stormtrooper, showed that its line of robots is growing from toys to service robots. They were demonstrating the first video-enabled humanoid robot with an Amazon Alexa communication system, touting the surveillance capabilities and avatar modes of this little (17″ tall) walking robot. It not only works with Alexa but can also get apps and skills and accept controls from iOS and Android. Still, like many of the other home robots, it can’t grasp objects, so it can’t perform services beyond remote presence and Alexa-like skills.
  • My Special Aflac Duck won the CES Best Unexpected Product Award: a social robot designed to look like the white Aflac duck and to help children coping with cancer. This is the second healthcare-related robotic device for kids from Sproutel, the maker of the duck. Their first product was Jerry the Bear for diabetic kids.
  • YYD Robo, another Chinese startup, was demonstrating a line of family companion and medical care robots (although the robots’ hands could not grasp), which also serve as child care and teaching robots. This Shenzhen-based robot company says it is fully staffed and has a million sq ft of manufacturing space, yet the robots weren’t working and their website doesn’t come up.
  • Hease Robotics, a French mobile kiosk startup, showed their Heasy robotic kiosk as a guide for retail stores, office facilities and public areas. At CES, there were many vending machine companies showing how they are transitioning to smart machines, and in some cases to smart and robotic ones like the Heasy.
  • Haapie SAS, also a French startup, was showing their tiny interactive, social and cognitive robots – consumer entertainment and Alexa-like capabilities. Haapie also integrates their voice recognition, speech synthesis and content management into smartphone clients.
  • LG, the Korean consumer products conglomerate, showed an ambitious line of robot products they call CLOi. One is an industrial floor cleaner for public spaces which can also serve as a kiosk/guide; another has a built-in tray for food and drink delivery in hotels; and a third can carry luggage and will follow customers around stores, airports and hotels. During a press conference, one of the robots tipped over and another failed to respond to instructions. Nevertheless all three were well designed and purposed, and the floor cleaner and porter robots are going to help out at the airport during next month’s Winter Olympics.
  • Two Chinese companies showed follow-me luggage: 90FUN with their Puppy suitcase and ForwardX with their smart suitcases. 90FUN is using Ninebot/Segway follow-me technology for the Puppy.
  • Twinswheel, a French startup, was showing a prototype of their parcel delivery land drone for factories, offices and last-mile deliveries.
  • Dobot is a Chinese inventor/developer of a transformable robot and 3D printer for educational purposes and a multi-functional industrial robot arm. They also make a variety of vision and stabilizing systems which are incorporated into their printers and robots. It’s very clever science. They even have a laser add-on for laser cutting and/or engraving. Dobot had a very successful Kickstarter campaign in 2017.
  • Evolver Robots is a Chinese developer of mobile domestic service robots specifically designed for children between the ages of 4 and 12, offering child education, video chat, games, mobile projection and remote control via smartphone.
  • Although drones had their own separate area at the show, there were many locations where I found them. From the agricultural spraying drones by Yamaha, DJI and Taiwanese Geosat Aerospace to the little deck-of-cards-sized ElanSelfie or the fold-into-a-5″ X 1/2″ high AEE Aviation selfie drone from Shenzhen, nothing stood out amongst the other 40+ drone vendors to compete with the might of DJI (which had two good-sized booths in different locations).
  • Segway (remember Dean Kamen?) is now fully owned by Ninebot, a Beijing provider of all types of robotic-assisted self-balancing scooters and devices. They are now focused on lifestyle and recreational riders in the consumer market, including the Loomo, which you ride like a hoverboard and then load up with cargo and have follow you home or to another place in the facility. At their booth they were pushing the Loomo for logistics operations; however, it can’t carry much and has balance problems. They would do better having it tow a cart.
  • Yujin Robot, a large Korean maker of robotic vacuums, educational robots, industrial robots, mobility platforms for research and a variety of consumer products, was showing their new logistics transport system GoCart with three different configurations of autonomous point-to-point robots.
  • The Buddy robot from Blue Frog Robotics won CES’s Robotics and Drones Innovation Award along with Soft Robotics and their grippers and control system that can pick items of varying size, shape and weight with a single device. Jibo, the Dobot (mentioned above) and 14 others also received the Robotics and Drones Innovation Award.

[The old adage in robotics that for every service robot there is a highly-skilled engineer by its side is still true… most of the social and home robots frequently didn’t work at the show and were either idle or being repaired.]

Silly things

    • A local strip club promoted their robotic pole dancers (free limo).
    • At a hospitality suite across from the convention center, the head of Harmony, the sex robot by San Diego area Abyss Creations (RealDoll), was available for demos and interviews. It will begin shipping this quarter at $8,000 to $10,000.
    • Crowd-drawing events were everywhere but this one drew the largest audiences: Omron’s ping-pong playing robot.
    • FoldiMate, a laundry folding robot, requires a human to feed the robot one article at a time for it to work (for just $980 in late 2019). Who’s the robot?
    • And Intel’s drones flew over the Bellagio hotel fountains in sync with the water and light musical show. Very cool.
    • ABC’s Shark Tank, the hit business-themed funding TV show, was searching for entrepreneurs with interesting products at an open call audition area.

Bottom Line

Each time I return from a CES (I’ve been to at least six) I swear I’ll never go again. It’s exhausting as well as overwhelming. It’s impossible to get to all the places one needs to go — and it’s cold. Plus, once I get there, the products are often so new and untested that they fail or are over-presented with too much hype. (LG’s new CLOi products failed repeatedly at their press conference; Sony’s Aibo ignored commands at the Sony press event.)

I end up asking myself, “Is this technology really ready for prime time? Will it be ready by the promised delivery dates? Or was it all just a hope-fest? A search for funding?” I still have no answer… perhaps all are true; perhaps that’s why I keep going. It’s as if my mind sifts through all the hype and chaff and ends up with what’s important. There’s no doubt this show was great for robotics and that Asian (particularly Chinese) vendors are the new power players. Maybe that’s why I copied down the dates for CES 2019.

The GM/Cruise robocar interior is refreshingly spartan

GM revealed photos of what they say is the production form of their self-driving car, based on the Chevy Bolt and Cruise software. They say it will be released next year, which would make it almost surely the first release from a major car company if they hit that date.

As reported in their press release and other sources, their goals are ambitious. While Waymo is the clear leader, it has deployed in Phoenix, probably the easiest big city for driving in the USA. Cruise/GM claims they have been working on the harder problem of a dense city like San Francisco.

What’s notable, though, about the Cruise picture is what’s not in it, namely much in the way of dashboard or controls. There is a small screen and a few controls, but little else. Likewise, the Waymo 3rd-generation “Firefly” car had almost no controls at all.

The car has climate controls and window controls and little else. Of course a touchscreen can control a lot of other things, especially when the “driver” does not have to watch the road.

Combine this with the concept Smart-car self-driving interior from Daimler below, and you see a change in the thinking of the car industry towards the thinking of the high-tech industry.

At most car conferences today, a large fraction of what you see is related to putting new things inside the car — fancier infotainment tools and other aspects of the “connected car.” These are the same companies who charge you $2,000 to put a navigation system into your car that you will turn off in 2 years because your phone does a better job.

It is not surprising that Google expects you to get your music, entertainment and connectivity from the device in your pocket — they make Android. It is a bigger step for GM and Daimler to realize this, and bodes well for them.

While the car is not finalized and the software certainly isn’t, GM feels it has settled on the hardware design of its version 1 car, and is going into production on it. I suspect they will make changes and tweaks as new sensors come down the pipeline, but traditionally car companies have always locked down the main elements of the hardware design a couple of years before a vehicle is available for sale.

One thing both of these cars need more of, which I know well from road tripping, is surfaces and places to put stuff. Cupholders and door pockets aren’t enough when a car is a work or relaxation station rather than something to drive.

What is not clear is if they have been bold enough to get rid of many of the other features not needed if you’re not driving, like fancy adjustable seats. The side-view mirrors are gone, with sensors in their place (it is widely anticipated that regulators will allow even human-driven cars to replace the mirrors with cameras, since that’s better and lower drag). Waymo’s Firefly had the mirrors because the law still demands them.

GM is also already working with NHTSA to get this car an exception to the Federal Motor Vehicle Safety Standards, which require things like the steering wheel that they took out. The feds say they will work quickly on this, so it seems likely. Several states are already preparing the necessary legal regime, and GM suggests it will deploy something in 2019.

Not too long ago, I would have ranked GM very far down on the list of carmakers likely to succeed in the robocar world. After they acquired Cruise they moved up the chart, but frankly I had been skeptical about how much a small startup, no matter how highly valued, could change a giant automaker. It now seems that intuition was wrong.

Engineers design artificial synapse for “brain-on-a-chip” hardware

From left: MIT researchers Scott H. Tan, Jeehwan Kim, and Shinhyun Choi
Image: Kuan Qiao

By Jennifer Chu

When it comes to processing power, the human brain just can’t be beat.

Packed within the squishy, football-sized organ are somewhere around 100 billion neurons. At any given moment, a single neuron can relay instructions to thousands of other neurons via synapses — the spaces between neurons, across which neurotransmitters are exchanged. There are more than 100 trillion synapses that mediate neuron signaling in the brain, strengthening some connections while pruning others, in a process that enables the brain to recognize patterns, remember facts, and carry out other learning tasks, at lightning speeds.

Researchers in the emerging field of “neuromorphic computing” have attempted to design computer chips that work like the human brain. Instead of carrying out computations based on binary, on/off signaling, like digital chips do today, the elements of a “brain on a chip” would work in an analog fashion, exchanging a gradient of signals, or “weights,” much like neurons that activate in various ways depending on the type and number of ions that flow across a synapse.

In this way, small neuromorphic chips could, like the brain, efficiently process millions of streams of parallel computations that are currently only possible with large banks of supercomputers. But one significant hangup on the way to such portable artificial intelligence has been the neural synapse, which has been particularly tricky to reproduce in hardware.

Now engineers at MIT have designed an artificial synapse in such a way that they can precisely control the strength of an electric current flowing across it, similar to the way ions flow between neurons. The team has built a small chip with artificial synapses, made from silicon germanium. In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting, with 95 percent accuracy.

The design, published today in the journal Nature Materials, is a major step toward building portable, low-power neuromorphic chips for use in pattern recognition and other learning tasks.

The research was led by Jeehwan Kim, the Class of 1947 Career Development Assistant Professor in the departments of Mechanical Engineering and Materials Science and Engineering, and a principal investigator in MIT’s Research Laboratory of Electronics and Microsystems Technology Laboratories. His co-authors are Shinhyun Choi (first author), Scott Tan (co-first author), Zefan Li, Yunjo Kim, Chanyeol Choi, and Hanwool Yeon of MIT, along with Pai-Yu Chen and Shimeng Yu of Arizona State University.

Too many paths

Most neuromorphic chip designs attempt to emulate the synaptic connection between neurons using two conductive layers separated by a “switching medium,” or synapse-like space. When a voltage is applied, ions should move in the switching medium to create conductive filaments, similarly to how the “weight” of a synapse changes.

But it’s been difficult to control the flow of ions in existing designs. Kim says that’s because most switching mediums, made of amorphous materials, have unlimited possible paths through which ions can travel — a bit like Pachinko, a mechanical arcade game that funnels small steel balls down through a series of pins and levers, which act to either divert or direct the balls out of the machine.

Like Pachinko, existing switching mediums contain multiple paths that make it difficult to predict where ions will make it through. Kim says that can create unwanted nonuniformity in a synapse’s performance.

“Once you apply some voltage to represent some data with your artificial neuron, you have to erase and be able to write it again in the exact same way,” Kim says. “But in an amorphous solid, when you write again, the ions go in different directions because there are lots of defects. This stream is changing, and it’s hard to control. That’s the biggest problem — nonuniformity of the artificial synapse.”

A perfect mismatch

Instead of using amorphous materials as an artificial synapse, Kim and his colleagues looked to single-crystalline silicon, a defect-free conducting material made from atoms arranged in a continuously ordered alignment. The team sought to create a precise, one-dimensional line defect, or dislocation, through the silicon, through which ions could predictably flow.

To do so, the researchers started with a wafer of silicon, resembling, at microscopic resolution, a chicken-wire pattern. They then grew a similar pattern of silicon germanium — a material also used commonly in transistors — on top of the silicon wafer. Silicon germanium’s lattice is slightly larger than that of silicon, and Kim found that together, the two perfectly mismatched materials can form a funnel-like dislocation, creating a single path through which ions can flow. 

The researchers fabricated a neuromorphic chip consisting of artificial synapses made from silicon germanium, each synapse measuring about 25 nanometers across. They applied voltage to each synapse and found that all synapses exhibited more or less the same current, or flow of ions, with about a 4 percent variation between synapses — a much more uniform performance compared with synapses made from amorphous material.

They also tested a single synapse over multiple trials, applying the same voltage over 700 cycles, and found the synapse exhibited the same current, with just 1 percent variation from cycle to cycle.

“This is the most uniform device we could achieve, which is the key to demonstrating artificial neural networks,” Kim says.

Writing, recognized

As a final test, Kim’s team explored how its device would perform if it were to carry out actual learning tasks — specifically, recognizing samples of handwriting, which researchers consider to be a first practical test for neuromorphic chips. Such chips would consist of “input/hidden/output neurons,” each connected to other “neurons” via filament-based artificial synapses.

Scientists believe such stacks of neural nets can be made to “learn.” For instance, when fed an input that is a handwritten ‘1,’ with an output that labels it as ‘1,’ certain output neurons will be activated by input neurons and weights from an artificial synapse. When more examples of handwritten ‘1s’ are fed into the same chip, the same output neurons may be activated when they sense similar features between different samples of the same letter, thus “learning” in a fashion similar to what the brain does.

Kim and his colleagues ran a computer simulation of an artificial neural network consisting of three sheets of neural layers connected via two layers of artificial synapses, the properties of which they based on measurements from their actual neuromorphic chip. They fed into their simulation tens of thousands of samples from a handwriting-recognition dataset commonly used by neuromorphic designers, and found that their neural network hardware recognized handwritten samples 95 percent of the time, compared to the 97 percent accuracy of existing software algorithms.
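The reported numbers invite a quick experiment of one's own. The sketch below is a toy stand-in (my own, not the authors' three-layer simulation or their dataset): it trains a plain softmax classifier on scikit-learn's small digits dataset, then perturbs the trained weights with 4 percent multiplicative noise to mimic the synapse-to-synapse variation reported above and checks how much accuracy changes.

```python
# Toy illustration of how device-to-device weight variation affects a trained
# classifier. This is NOT the authors' simulation (they used a three-layer
# network and a larger handwriting dataset); it is a minimal stand-in.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
X = X / 16.0                                   # scale 8x8 pixel values to [0, 1]
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

n_feat, n_cls = Xtr.shape[1], 10
W = np.zeros((n_feat, n_cls))
b = np.zeros(n_cls)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Full-batch gradient descent on cross-entropy (simple, convex, converges reliably).
Y = np.eye(n_cls)[ytr]
for _ in range(500):
    P = softmax(Xtr @ W + b)
    W -= 0.5 * (Xtr.T @ (P - Y)) / len(Xtr)
    b -= 0.5 * (P - Y).mean(axis=0)

def accuracy(W, b):
    return (softmax(Xte @ W + b).argmax(axis=1) == yte).mean()

print(f"ideal weights:            {accuracy(W, b):.3f}")
# Mimic ~4% synapse-to-synapse variation (the uniformity figure reported above).
W_dev = W * (1 + 0.04 * rng.standard_normal(W.shape))
print(f"with 4% weight variation: {accuracy(W_dev, b):.3f}")
```

The point of the exercise is only that a few percent of weight noise costs very little accuracy, which is consistent with the paper's claim that device uniformity is what makes hardware networks practical.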

The team is in the process of fabricating a working neuromorphic chip that can carry out handwriting-recognition tasks, not in simulation but in reality. Looking beyond handwriting, Kim says the team’s artificial synapse design will enable much smaller, portable neural network devices that can perform complex computations that currently are only possible with large supercomputers.

“Ultimately we want a chip as big as a fingernail to replace one big supercomputer,” Kim says. “This opens a stepping stone to produce real artificial hardware.”

This research was supported in part by the National Science Foundation.

#252: Embedded Platform for SLAM, with Zhe Zhang



In this episode, Abate talks with Zhe Zhang from PerceptIn, where they are building embedded platforms that let robots run Simultaneous Localization and Mapping (SLAM) algorithms in real time. Zhe explains the methods they incorporate, such as sensor fusion and hardware synchronization, to make a highly accurate SLAM platform for IoT, consumer, and automotive-grade robots.

Zhe Zhang

Zhe is the co-founder and CEO of PerceptIn. Prior to founding PerceptIn, he worked at Magic Leap in Silicon Valley and prior to that he worked for five years at Microsoft. Zhang has a PhD in Robotics from the State University of New York and an undergraduate degree from Tsinghua University.


Further research recommended by Zhe:

  1. Probabilistic Data Association for Semantic SLAM
  2. A Comparative Analysis of Tightly-coupled Monocular, Binocular, and Stereo VINS
  3. Computer Vision for Autonomous Vehicles: Problems, Datasets and State-of-the-Art


Robots in Depth with Erin Rapacki


In this episode of Robots in Depth, Per Sjöborg speaks with Erin Rapacki about how the FIRST robotics competition was a natural and inspiring way into her career spanning multiple robotics companies.

Erin talks about her work marketing for several startups, including selling points, examples from different industries, and launching a robotics product. She also shares her game plan for starting a new robotics company and insight on the startup and venture capital landscape.

Tons of LIDARs at CES 2018

When it comes to robocars, new LIDAR products were the story of CES 2018. Far more companies showed off LIDAR products than can succeed, with a surprising variety of approaches. CES is now the 5th largest car show, with almost the entire north hall devoted to cars. In coming articles I will look at other sensors, software teams and non-car aspects of CES, but let’s begin with the LIDARs.

Velodyne

When it comes to robocar LIDAR, the pioneer was certainly Velodyne, who largely owned the market for close to a decade. Their $75,000 64-laser spinning drum has been the core product for many years, while most newer cars feature their 16- and 32-laser “puck”-shaped units. The price of the pucks was recently cut in half, and they showed off the new $100K 128-laser unit as well as a new, more rectangular unit called the Veloray that uses a vibrating mirror to steer the beam for a forward view rather than a 360-degree view.

The Velodyne 64-laser unit has become such an icon that its physical properties have become a point of contention. The price has always been too much for any consumer car (and this is also true of the $100K unit, of course), but teams doing development have wisely realized that they want to do R&D with the most high-end unit available, expecting those capabilities to be moderately priced when it’s time to go into production. The Velodyne is also large, heavy and, because it spins, quite distinctive. Many car companies and LIDAR companies have decried these attributes in their efforts to be different from Waymo (which uses its own in-house LIDAR now) and Velodyne. Most products out there are either clones of the Velodyne or much smaller units with a 90- to 120-degree field of view.

Quanergy

I helped Quanergy get going, so I have an interest and won’t comment much. Their recent big milestone is going into production on their 8-line solid state LIDAR. Automakers are very big on having few to no moving parts, so many companies are trying to produce that. Quanergy’s specifications are well below the Velodyne and many other units, but being in production makes a huge difference to automakers. If they don’t stumble on the production schedule, they will do well. With lower-resolution instruments and smaller fields of view, you will need multiple units, so their cost must be kept low.

Luminar and 1.5 micron

The hot rising star of the hour is Luminar, with its high-performance, long-range LIDAR. Luminar is part of the subset of LIDAR makers using infrared light in the 1.5 micron (1550nm) range. The eye does not focus this light into a spot on the retina, so you can emit a lot more power from your laser without being dangerous to the eye. (There are some who dispute this and think there may be more danger to the cornea than believed.)

The ability to use more power allows longer range, and longer range is important, particularly getting a clear view of obstacles over 200m away. At that range, more conventional infrared LIDAR in the 900nm band has limitations, particularly on dark objects like a black car or a pedestrian in black clothing. Even if the limitations are reduced, as some 900nm vendors claim, that’s not good enough for most teams if they are trying to make a car that goes at highway speeds.

At 80mph on dry pavement, you can hard brake in 100m. On wet pavement it’s 220m to stop, so you should not be going 80mph but people do. But a system, like a human, needs time to decide if it needs to brake, and it also doesn’t want to brake really hard. It usually takes at least 100ms just to receive a frame from sensors, and more time to figure out what it all means. No system figures it out from just one frame, usually a few frames have to be analyzed.
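To make that arithmetic concrete, here is a small back-of-the-envelope sketch. The deceleration values and the half-second perception latency are my own illustrative assumptions, not measured data; the point is simply that latency and a margin get added on top of the kinematic stopping distance.

```python
# Back-of-the-envelope required sensing range at highway speed.
# Deceleration values and latency are illustrative assumptions, not measured data.
MPH_TO_MS = 0.44704

def required_range_m(speed_mph, decel_ms2, perception_latency_s, margin_m=20.0):
    v = speed_mph * MPH_TO_MS
    reaction_dist = v * perception_latency_s      # distance covered before braking starts
    braking_dist = v * v / (2.0 * decel_ms2)      # kinematic stopping distance
    return reaction_dist + braking_dist + margin_m

# ~0.65 g hard braking on dry pavement, ~0.3 g on wet; 0.5 s to sense, classify, decide.
for surface, decel in [("dry", 6.4), ("wet", 2.9)]:
    need = required_range_m(80, decel, perception_latency_s=0.5)
    print(f"80 mph, {surface}: need to see ~{need:.0f} m ahead")
```

With these assumptions the dry-pavement braking distance works out to roughly 100m and the wet one to roughly 220m, matching the figures above, and the latency plus margin push the required sensing range well past 200m in the wet case.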

On top of that, 200m out, you really only get a few spots on a LIDAR from a car, and one spot from a pedestrian. That first spot is a warning but not enough to trigger braking. You need a few frames and want to see more spots and get more data to know what you’re seeing. And a safety margin on top of that. So people are very interested in what 1.5 micron LIDAR can see — as well as radar, which also sees that far.

The problem with building 1.5 micron LIDAR is that light at that wavelength is not detected by silicon. You need more exotic stuff, like indium gallium arsenide. This is not really “exotic,” but compared to silicon it is. Our world knows a lot about how to do things with silicon, and making things with it is super mature and cheap. Anything else is expensive in comparison, and even made in the millions, other materials won’t match it.

The new Velodyne has 128 lasers and 128 photodetectors and costs a lot. 128 lasers and detectors for 1.5 micron would be super costly today. That’s why Luminar’s design uses two modules, each with a single laser and detector. The beam is very powerful and moved super fast. It is steered by moving a mirror to sweep out rasters (scan lines) both back and forth and up and down. The faster you move your beam, the more power you can put into it — what matters is how much energy it puts into your eye as it sweeps over it, and how many times it hits your eye every second.

The Luminar can do a super detailed sweep if you only want one sweep per second, and the point clouds (collections of spots with the distance and brightness measured, projected into 3D) look extremely detailed and nice. To drive, you need at least 10 sweeps per second, and so the resolution drops a lot, but is still good.

Another limit which may surprise you is the speed of light. To see something 250m away, the light takes 1.6 microseconds to go out and back. This limits how many points per second you can get from one laser. Speeding up light is not an option. There are also limits on how much power you can put through your laser before it overheats.
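That round-trip limit is easy to quantify. A quick sketch (my own arithmetic, nothing vendor-specific) of the maximum point rate a single time-of-flight laser can sustain at a given range:

```python
# Round-trip time of flight caps how many points per second one laser can measure.
C = 299_792_458.0  # speed of light, m/s

def max_points_per_second(range_m):
    round_trip_s = 2.0 * range_m / C
    return 1.0 / round_trip_s

for r in (50, 100, 250):
    print(f"{r:4d} m range: round trip {2 * r / C * 1e6:.2f} us, "
          f"<= {max_points_per_second(r) / 1e3:.0f} k points/s per laser")
```

At 250m the round trip is about 1.7 microseconds, so a single laser tops out around 600,000 points per second before you even account for laser heating or processing.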

(To avoid overheating, you can concentrate your point budget on the regions of greatest interest, such as those along the road or those that have known targets. I describe this in this US patent.) While I did not say it, several people reported to me that Luminar’s current units require a fairly hefty box to deliver power to the LIDAR and do the computational post-processing to produce the pretty point clouds.


Luminar’s device is also very expensive (they won’t publish the price) but Toyota built a test vehicle with 4 of their double-laser units, one in each of the 4 directions. Some teams are expressing the desire to see over 200m in all directions, while others think it is really only necessary to see that far in the forward direction.

You do have to see far to the left and right when you are entering an intersection with cross traffic, and you also have to see far behind if you are changing lanes on a German autobahn and somebody might be coming up on your left 100km/h faster than you (it happens!). Many teams feel that radar is sufficient there, because the type of decision you need to make (do I go or don’t I) is not nearly so complex and needs less information.

As noted before, while most early LIDARS were in the 900nm bands, Google/Waymo built their own custom LIDAR with long range, and the effort of Uber to build the same is the subject of the very famous lawsuit between the two companies. Princeton Lightwave, another company making 1.5 micron LIDAR, was recently acquired by Ford/Argo AI — an organization run by Bryan Salesky, another Google car alumnus.

I saw a few other companies with 1.5 micron LIDARs, though none as far along as Luminar; several pointed out that they did not need the large box for power and computing, suggesting they only needed about 20 to 40 watts for the whole device. One was Innovusion, which did not have a booth but showed me the device in their suite. However, in the suite it was not possible to test range claims.

Tetravue and new Time of Flight

Tetravue showed off their radically different time of flight technology. So far there have been only a few methods to measure how long it takes for a pulse of light to go out and come back, thus learning the distance.


The classic method is basic sub-nanosecond timing. To get 1cm accuracy, you need to measure the time at about 50 picosecond accuracy, and circuits are getting better than that. This can be done either with scanning pulses, where you send out a pulse and then look in precisely that direction for the return, or with “flash” LIDAR, where you send out a wide, illuminating pulse and then have an array of detector/timers which count how long each pixel took to get back. This method works at almost any distance.

The second method is to use phase. You send out a continuous beam but you modulate it. When the return comes back, it will be out of phase with the outgoing signal. How much out of phase depends on how long it took to come back, so if you can measure the phase, you can measure the time and distance. This method is much cheaper but tends to only be useful out to about 10m.

Tetravue offers a new method. They send out a flash, and put a decaying (or opening) shutter in front of an ordinary return sensor. Depending on when the light arrives, it is attenuated by the shutter. The amount it is attenuated tells you when it arrived.
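These schemes boil down to simple conversions. Here is a short sketch (my own illustration, not tied to any vendor) of the numbers behind the first two methods: the timing precision direct time-of-flight needs for a given range resolution, and the unambiguous range of a phase-modulated system.

```python
# Simple conversions behind direct time-of-flight and phase-based ranging.
C = 299_792_458.0  # speed of light, m/s

def round_trip_time_for(range_resolution_m):
    """Direct ToF: the round-trip time corresponding to one unit of range resolution."""
    return 2.0 * range_resolution_m / C            # seconds

def unambiguous_range(mod_freq_hz):
    """Phase method: beyond this distance the phase wraps and the range is ambiguous."""
    return C / (2.0 * mod_freq_hz)                 # metres

print(f"1 cm of range corresponds to ~{round_trip_time_for(0.01) * 1e12:.0f} ps "
      f"of round trip, so timers need tens-of-picosecond resolution")
print(f"10 MHz modulation gives ~{unambiguous_range(10e6):.0f} m unambiguous range")
```

The second number hints at why phase-based units are typically short-range devices: pushing the unambiguous range out means lowering the modulation frequency, which in turn degrades the distance resolution.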

I am interested in this because I played with such designs myself back in 2011, though I ended up proposing the technique for a new type of flash camera with even illumination because I did not feel you could get enough range. Indeed, Tetravue only claims a maximum range of 80m, which is limiting — it’s not enough for highway driving or even local expressway driving, but could be useful for lower-speed urban vehicles.

The big advantages of this method are cost — it uses mostly commodity hardware — and resolution. Their demo was a 1280×720 camera, and they said they were making a 4K camera. That’s actually too much resolution for most neural networks, but digital crops from within it could work very well, making for some of the best object recognition results to be found, at least on closer targets. This might be a great tool for recognizing things like pedestrian body language and more.

At present the Tetravue uses light in the 800nm band. That is received more efficiently by silicon, but there is more ambient sunlight in this band to interfere.

The different ways to steer

In addition to differing ways to measure the pulse, there are also many ways to steer it. Some of those ways include:

  • Mechanical spinning — this is what the Velodyne and other round LIDARs do. This allows the easy making of 360-degree-view LIDARs and, in the case of the Velodyne, it also stops rain from collecting on the instrument. One big issue is that people are afraid of the reliability of moving parts, especially at scale.
  • Moving, spinning or vibrating mirrors. These can be sealed inside a box and the movement can be fairly small.
  • MEMS mirrors, which are microscopic mirrors on a chip. Still moving, but effectively solid state. These are how DLP projectors work. Some new companies like Innovision featured LIDARs steered this way.
  • Phased arrays — you can steer a beam by having several emitters and adjusting the phase so the resulting beam goes where you want it. This is entirely solid state.
  • Spectral deflection — it is speculated that some LIDARS do vertical steering by tuning the frequency of the beam, and then using a prism so this adjusts the angle of the beam.
  • Flash LIDAR, which does not steer at all, and has an array of detectors

There are companies using all these approaches, or combinations of them.

The range of 900nm LIDAR

The most common and cheapest LIDARs are, as noted, in the 900nm wavelength band. This is a near-infrared band, but it is far enough from visible that not a lot of ambient light interferes. At the same time, at these wavelengths it’s harder to get silicon to trigger on the photons, so it’s a trade-off.

Because it acts like visible light and is focused by the lens of the eye, keeping it eye-safe is a problem. At a bit beyond 100m, at the maximum radiation level that is eye-safe, fewer and fewer photons reflect back to the detector from dark objects. Yet many vendors are claiming ranges of 200m or even 300m in this band, while others claim that is impossible. Only a hands-on analysis can tell how reliably these longer ranges can actually be delivered, but most feel it can’t be done at the level needed.

There are some tricks which can help, including increasing sensitivity, but there are physical limits. One technique that is being considered is dynamic adjustment of the pulse power, reducing it when the target is close to the laser. Right now, if you want to send out a beam that you will see back from 200m, it needs to be so powerful that it could hurt the eye of somebody close to the sensor. Most devices try for physical eye-safety; they don’t emit power that would be unsafe to anybody. The beam itself is at a dangerous level, but it moves so fast that the total radiation at any one spot is acceptable. They have interlocks so that the laser shuts down if it ever stops moving.

To see further, you would need to detect the presence of an object (such as the side of a person’s head) that is close to you, and reduce the power before the laser scanned over to their eyes, keeping it low until past the head, then boosting it immediately to see far-away objects behind the head. This can work, but now a failure of the electronic power control circuits could turn the device into an unsafe one, which people are less willing to risk.
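The policy being described boils down to: default to an eye-safe level, raise power only when the direction the beam is about to scan is known to be clear, and stop emitting on any fault. A minimal sketch of that logic follows; it is entirely my own illustration, the thresholds and function names are invented, and real systems enforce this behaviour in certified hardware interlocks rather than application code.

```python
# Sketch of dynamic pulse-power control for a scanning LIDAR, as described above.
# Thresholds and the API are invented for illustration only.
SAFE_POWER = 0.1      # always eye-safe level (arbitrary units)
HIGH_POWER = 1.0      # long-range level, only used when nothing is close
NEAR_FIELD_M = 10.0   # if anything is within this range ahead of the beam, stay low

def choose_pulse_power(nearest_return_m, scanner_moving, power_ctrl_ok):
    """Pick the pulse power for the next shot along the current scan direction."""
    if not scanner_moving or not power_ctrl_ok:
        return 0.0                      # interlock: stop emitting entirely
    if nearest_return_m is not None and nearest_return_m < NEAR_FIELD_M:
        return SAFE_POWER               # someone or something close: stay eye-safe
    return HIGH_POWER                   # clear ahead: boost to see distant objects

# Example: the previous frame saw a return at 3 m in this scan direction.
print(choose_pulse_power(nearest_return_m=3.0, scanner_moving=True, power_ctrl_ok=True))
```

The design concern raised above is visible even in this toy: the safety of the high-power branch depends on the range data and the fault flags being trustworthy, which is exactly why many vendors prefer a beam that is passively eye-safe everywhere.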

The Price of LIDARs

LIDAR prices are all over the map, from the $100,000 Velodyne 128-line to solid state units forecast to drop close to $100. Who are the customers that will pay these prices?

High End

Developers working on prototypes often choose the very best (and thus most expensive) unit they can get their hands on. The cost is not very important on prototypes, and you don’t plan to release for a few years. These teams make the bet that high-performance units will be much cheaper when it’s time to ship. You want to develop and test with the performance you will be able to buy in the future.

That’s why a large fraction of teams drive around with $75,000 Velodynes or more. That’s too much for a production unit, but they don’t care about that. It has led people to predict, incorrectly, that robocars are far in the future.

Middle End

Units in the $7,000 to $20,000 range are too expensive as an add-on feature for a personal car. There is no component of a modern car that is this expensive, except the battery pack in a Tesla. But for a taxi, it’s a different story. With so many people sharing it, the cost is not out of the question compared to the human driver who is taken out of the equation. In fact, some would argue that even the $100,000 LIDAR, amortized over 6 years, is still cheaper than that.
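A quick sanity check of that claim, using an assumed figure for driver cost (the $40,000 per year is my own placeholder, not a number from the article):

```python
# Amortized sensor cost vs. an assumed annual cost of a human taxi driver.
lidar_cost = 100_000           # high-end unit, figure from the article
service_years = 6              # amortization period used in the article
driver_cost_per_year = 40_000  # assumed wages/benefits for a full-time driver

print(f"LIDAR:  ${lidar_cost / service_years:,.0f} per year")
print(f"Driver: ${driver_cost_per_year:,.0f} per year")
```

Under those assumptions the sensor amortizes to under $17,000 a year, well below what a full-time driver costs, which is why taxi operators tolerate prices that would be absurd on a consumer car.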

In this case, cost is not the big issue, it’s performance. Everybody wants to be out on the market first, and if a better sensor can get you out a year sooner, you’ll pay it.

Low end

LIDARs that cost (or will cost) below $1,000, especially below $250, are the ones of interest to major automakers who still think the old way: Building cars for customers with a self-drive feature.

They don’t want to add a great deal to the bill of materials for their cars, and so they are driving demand for the low-end, typically solid state, devices.

None

None of these LIDARs are available today in automotive quantities or quality levels. Thus you see companies like Tesla, who want to ship a car today, designing without LIDAR. Those who view LIDAR as inherently expensive believe that lower-cost methods, like computer vision with cameras, are the right choice. They are right in the very short run (because you can’t get a LIDAR) and in the very long run (when cost will become the main driving factor), but probably wrong in the time scales that matter.

Some LIDARs at CES

Here is a list of some of the other LIDAR companies I came across at CES. There were even more than this.

AEye — MEMS LIDAR and software fused with visible light camera

Cepton — MEMS-like steering, claims 200 to 300m range

Robosense RS-LiDAR-M1 Pro — claims 200m range, MEMS steering, 20fps, 0.09 deg by 0.2 deg resolution, 63 x 20 degree field of view

Surestar R-Fans (16 and 32 laser, puck style), up to 20Hz

Leishen MX series — short range for robots, 8 to 48 line units, (puck style)

ETRI LaserEye — Korean research product

Benewake — flash LIDAR, shorter range

Infineon — box LIDAR (Innoluce prototype)

Innoviz — MEMS mirror 900nm LIDAR with claimed longer range

Leddartech — older company from Quebec, now making a flash LIDAR

Ouster — $12,000 super light LIDAR.

To come: thermal sensors, radars, computer vision and better cameras
