
Sewing a mechanical future

SoftWear Automation’s Sewbot. Credit: SoftWear Automation

The Financial Times reported earlier this year that one of the largest clothing manufacturers, Hong Kong-based Crystal Group, proclaimed that robotics could not compete with the cost and quality of manual labor. Crystal’s chief executive, Andrew Lo, declared emphatically, “The handling of soft materials is really hard for robots.” Lo did leave the door open, however, acknowledging such budding technologies as “interesting.”

One company mentioned by Lo was Georgia Tech spinout SoftWear Automation. SoftWear made news last summer by announcing a contract with an Arkansas apparel factory to update 21 production lines with its Sewbot automated sewing machines. The factory is owned by Chinese manufacturer Tianyuan Garments, which produces over 20 million T-shirts a year for Adidas. Tianyuan’s chairman, Tang Xinhong, boasted about his new investment, saying that with Sewbot bringing costs down to $0.33 a shirt, “Around the world, even the cheapest labor market can’t compete with us.”

The challenge in automating cut-and-sew operations to date has been the handling of textiles, which come in a seemingly infinite variety and stretch, skew, flop and move with great fluidity. To solve this problem, SoftWear uses computer vision to track each individual thread. According to its issued patents, SoftWear developed a specialized camera that captures threads at 1,000 frames per second and tracks their movements using proprietary algorithms. SoftWear embedded these cameras around robot end effectors that manipulate the fabrics much as human fingers would. According to a description in IEEE Spectrum, these “micromanipulators, powered by precise linear actuators, can guide a piece of cloth through a sewing machine with submillimeter precision, correcting for distortions of the material.” To further ensure the highest level of quality, Sewbot uses a four-axis robotic arm with a vacuum gripper that picks and places the textiles on a sewing table equipped with a 360-degree conveyor system and spherical rollers to quickly move the fabric panels around.
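
What the patents describe amounts to a high-rate visual servoing loop. Here is a minimal sketch of that idea (in Python, with hypothetical function names and an assumed pixel-to-millimeter scale; SoftWear's actual algorithms are proprietary): each 1 ms frame yields a measured thread offset, which a simple proportional controller converts into an actuator correction.

```python
import numpy as np

# The ~1,000 fps figure comes from the article; everything else is assumed.
FRAME_RATE_HZ = 1000
PX_TO_MM = 0.01                    # assumed pixel-to-millimeter scale

def detect_thread_offset(frame: np.ndarray, seam_target_px: float) -> float:
    """Hypothetical stand-in for the proprietary thread tracker: return the
    lateral offset (mm) of the brightest "thread" row from the seam line."""
    thread_px = float(frame.argmax(axis=0).mean())
    return (thread_px - seam_target_px) * PX_TO_MM

def servo_step(frame: np.ndarray, seam_target_px: float, gain: float = 0.5) -> float:
    """One 1 ms control tick: measure fabric distortion and return the
    correction (mm) to command the micromanipulator's linear actuator."""
    error_mm = detect_thread_offset(frame, seam_target_px)
    return -gain * error_mm        # simple proportional correction

# Example: a synthetic 64x64 frame whose bright "thread" sits 3 px off target.
frame = np.zeros((64, 64))
frame[29, :] = 1.0
print(servo_step(frame, seam_target_px=32.0))  # ~ +0.015 mm correction
```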

SoftWear’s CEO, Palaniswamy “Raj” Rajan, explained, “Our vision is that we should be able to manufacture clothing anywhere in the world and not rely on cheap labor and outsourcing.” Rajan appears to be working hard toward that goal, professing that his robots are already capable of producing more than 2 million products sold at Target and Walmart. According to IEEE Spectrum, Rajan further asserted in a press release that by the end of 2017 Sewbot would be on track to produce “30 million pieces a year.” It is unclear whether that objective was ever met. SoftWear did announce the closing of a $7.5 million financing round led by CTW Venture Partners, a firm where Rajan is also the managing partner.

SoftWear Automation is not the only company focused on automating the trillion-dollar apparel industry. Sewbo has been turning heads with its innovative approach to fabric manipulation. Unlike SoftWear, which is taking the more arduous route of revolutionizing the machines themselves, Sewbo turns textiles into a hardened material that is easy for off-the-shelf robots and existing sewing equipment to handle. Sewbo’s secret sauce, literally, is a water-soluble thermoplastic stiffening solution that turns cloth into a cardboard-like material. Blown away by its creativity and simplicity, I sat down with its inventor, Jonathan Zornow, last week to learn more about the future of fashion automation.

Zornow explained that after researching the best polymers to laminate safely onto fabrics using a patent-pending technique, he was able to unveil the “world’s first and only robotically-sewn garment” last year. Since then, Zornow has been contacted by almost every apparel manufacturer (excluding Crystal) to explore automating their production lines. Zornow has hit a nerve with an industry, especially in Asia, that finds itself in the labor-management business, with monthly attrition rates of 10% and huge drop-offs after Chinese New Year. Zornow shared that “many factory owners were in fact annoyed that they couldn’t buy the product today.”

Zornow believes that automation technologies could initially be a boon for bringing small-batch production back to the USA, prompting an industry of “mass customization” closer to the consumer. As reported in 2015, apparel brands have been moving manufacturing back from China with 3D-printing technologies for shoes and knit fabrics. Long-term, Zornow said, “I think that automation will be an important tool for the burgeoning reshoring movement by helping domestic factories compete with offshore factories’ lower labor costs. When automation becomes a competitive alternative, a big part of its appeal will be how many headaches it relieves for the industry.”

To date, fulfilling the promise of “Made in America” has proven difficult; as Zornow explained, we have forgotten how to make things here. According to a recent report by the American Apparel & Footwear Association, the share of US-made apparel fell from 50% in 1994 to roughly 3% in 2015, meaning 97% of clothing today is imported. For example, Zornow shared with me how his competitor was born. In 2002, the US Congress codified the Berry Amendment, requiring the armed services to source uniforms domestically, which led DARPA to grant $1.75 million to a Georgia Tech team to build a prototype of an automated sewing machine. As Rajan explains, “The Berry Amendment went into effect restricting the military from procuring clothing that was not made in the USA. Complying with the rule proved challenging due to a lack of skilled labour available in the US that only got worse as the current generation of seamstresses retired with no new talent to take their place. It was under these circumstances that the initial idea for Softwear was born and the company was launched in 2012.”

I first met Zornow at RoboBusiness last September, when he demonstrated for a packed crowd how Sewbo is able to efficiently recreate the 10-20 steps needed to sew a T-shirt. However, producing a typical men’s dress shirt can require up to 80 different steps. Zornow pragmatically explains the road ahead: “It will be a very long time, if ever, before things are 100% automated.” He points to examples of current automation in fabric production, such as dyeing, cutting and finishing, which augment manual labor. Following this trend, “they’re able to leverage machines to achieve incredible productivity, to the point where the labor cost to manufacture a yard of fabric is usually de minimis.” Zornow foresees a future where his technology is just another step in the production line, as forward-thinking factories are planning two decades ahead, recognizing that in order “to stay competitive they need new technologies” like Sewbo.

Best practices in designing effective roadmaps for robotics innovation

In the past decade, countries and regions around the globe have developed strategic roadmaps to guide investment in and development of robotic technology. Roadmaps from the US, South Korea, Japan and the EU have been in place for some years and have had time to mature and evolve. Meanwhile, roadmaps from other countries, such as Australia and Singapore, are just now being developed and launched. How did these strategic initiatives come to be? What do they hope to achieve? Have they been successful, and how do you measure success?

To explore these issues, former Robohub Editor Hallie Siegel and Open Roboethics Institute (ORi) Co-founder and Director AJung Moon invited researchers, policymakers, and industry members who have played a significant role in launching and shaping major strategic robotics initiatives in their regions to participate in an IROS 2017 workshop “Best practices in designing effective roadmaps for robotics innovation” to see what is working, what is not, and to uncover what best practices in roadmap development (if any) might be broadly applied to other regions.

Specifically, the workshop sought to examine the process of how these policy frameworks came to be created in the first place, how they have been tailored to local capabilities and strengths, and what performance indicators are being used to measure their success — so that participants could draw from international collective experience as they design and evaluate strategic robotics initiatives for their own regions.
The highlight of the workshop was a pair of panel discussions moderated by robotics ecosystem expert Andra Keay, Managing Director of the robotics industry cluster association Silicon Valley Robotics.

The first panel — featuring Peter Corke (QUT, Australia), Kyung-Hoon Kim (Ministry of Trade, Industry & Energy (MOTIE), South Korea), Rainer Bischoff (euRobotics AISBL, EU), Dario Floreano (NCCR Robotics, EPFL, Switzerland), Sue Keay (Queensland University of Technology, Australia), and Raj Madhavan (Humanitarian Robotics Technologies, LLC, USA) — was an in-depth discussion of regional differences in roadmap motivation, leadership, and strategy design from the perspectives of government, industry and academia.

The second panel — featuring Raja Chatila (IEEE Global Initiative for Ethical Considerations in Artificial Intelligence & Autonomous Systems), AJung Moon (Open Roboethics Institute), Alex Shikany (Robotic Industries Association), and Sabine Hauert (Robohub.org) — covered the ways in which issues such as roboethics, public perception and the fear of job loss due to automation are influencing robotics policy in different regions around the globe.

Lenka Pitonakova was part of the audience and provides her impressions below.

State-of-the-art in different countries

The workshop started with presentations on the process of designing roadmaps for robotics research and development. Individual speakers discussed best practices and problems encountered in their respective countries.

Dr. Rainer Bischoff (euRobotics, EU)

Robotics research is at a very mature stage in the EU. A network of robotics researchers and companies cooperates closely on issues such as using robots in healthcare and logistics, as well as for maintenance and inspection of infrastructure. The partnership between the European Commission and European industry and academia is called SPARC. Decisions about which research areas to fund in the European Commission’s H2020 programme are made in a bottom-up fashion, meaning stakeholders get to influence the priority areas for funding. This is done through robotics topic groups established throughout the EU, which help shape the Robotics Multi-Annual Roadmap, which in turn shapes the work programme of the European Commission. Public outreach is also very important in the EU: not only are all funding decisions openly available to the public, researchers are also encouraged to perform outreach activities.

Dr. Dario Floreano (NCCR Robotics and EPFL, Switzerland)

One main source of funding in Switzerland currently comes from the NCCR scheme, a 12-year programme that started in 2010 with four target areas: research, technology transfer (especially the creation of start-ups), education and outreach. Part of the programme is also structural change in Switzerland’s research institutions: since 2010, new robotics centres have been created and many new professors appointed. Switzerland takes a very proactive approach to applied research, and technology transfer is as important as the research itself. The most important areas of interest include wearable robots, mobile robots for search and rescue, human-robot interaction and teleoperation, as well as educational robots that can teach computational thinking.

Sue Keay (Australian Centre for Robotic Vision, Australia)

Australia currently lags other Western countries when it comes to automation, mostly because there is no overarching body to unify the different research groups and provide substantial research funding. A plan for creating a centralised institution to support research is currently being formed, and efforts are underway to persuade the government of the importance of investing in robotics. To this end, the ICRA 2018 conference, which will take place in Brisbane, Australia, is an important event for the Australian robotics community. Among the focus areas for future research, mining, manufacturing, defence and healthcare have been identified as the most important.

Dr. Kyung-Hoon Kim (Ministry of Trade, Industry & Energy, South Korea)

The government of South Korea has directed the country’s robotics research focus via the Intelligent Robots Development and Promotion Act since 2008. Every five years, the research roadmap is revisited by government experts and new funding areas are identified. The areas of interest include machine learning and big-data analysis, human-robot collaboration, development of hardware parts and software platforms, and the application of robotics in smart factories.

Dr. Raj Madhavan (Humanitarian Robotics Technologies, USA and India)

Researchers in India currently do not get much government support for robotics research, although some work on a national research roadmap for robotics started in early 2017. The lack of government support is not the only problem India faces, however: there also seems to be a lack of interest and commitment from individual researchers to unite their efforts and collaborate on a national level.

Research roadmaps and funding

In a panel discussion that followed the presentations, the following stakeholders in research and innovation were identified:

  • Governments: Provide investment money and shape regulations
  • Academia: Provides foresight into new technologies
  • Industry: Creates the need for research and applies it

A crucial factor that influences interest from government and industry was identified: the ability of researchers to provide estimates of the economic impact of their work. In the EU especially, robotics started growing as a research field when industry became more involved in funding and roadmap creation.

Secondly, it was noted that engaging the government and the public is also very important. Because government regulations can stop a technology from being developed and used, politicians should be engaged early with new ideas and with how those ideas will shape society. End-users, i.e., the public, also need to understand the impact of new technology on their lives, both to encourage adoption and to mitigate fears of negative impacts. However, engaging all stakeholders and making research and development relevant to all of them is often very difficult because of differences in opinions and long-term goals.

Challenges for adoption of robotics technologies


The second panel discussion focused on challenges for adoption of robotic technologies.

Public acceptance and uncertainty about the impact of technology on the well-being of people and society as a whole, as well as the fear of losing control of autonomous systems, were identified as the most important topics to address. To mitigate these fears, it is useful to provide the public with statistics on how technology has affected jobs in the recent past, as well as well-informed projections for the near future.

It is crucial for scientists to be trained in, and to apply, best practices in public communication, especially as commentary on new technology often comes from the media, where non-experts (consciously or not) can misinform the public about the impacts and capabilities of new technology.

CES 2018: Robots, AI, massive data and prodigious plans

This year’s CES was a great show for robots. “From the latest in self-driving vehicles, smart cities, AI, sports tech, robotics, health and fitness tech and more, the innovation at CES 2018 will further global business and spur new jobs and new markets around the world,” said Gary Shapiro, president and CEO of the Consumer Technology Association (CTA).

But with that breadth of coverage and an estimated 200,000 visitors, 7,000 media, 3,000 exhibitors, 900 startups, 2.75 million sq ft of floor space across two convention centers, hospitality suites in almost every major hotel on the Las Vegas Strip, over 20,000 product announcements and 900 speakers in 200 conference sessions come massive traffic (humans, cars, taxis and buses), power outages, product-launch snafus and humor.

AI, big data, Amazon, Google, Alibaba and Baidu

“It’s the year of A.I. and conversational interfaces,” said J. P. Gownder, an analyst for Forrester Research, “particularly advancing those interfaces from basic conversations to relationships.” Voice control of almost everything from robots to refrigerators was de rigueur. The growing amount of artificial-intelligence software, and the race between Amazon, Google and their Chinese counterparts Alibaba and Baidu to be the go-to service for integration, was on full display. Signs advertised that products worked with Google Assistant, Amazon’s Alexa or both, or with DuerOS (Baidu’s conversational operating system); but judging by the sheer number of products that worked with the Alexa voice assistant, Amazon appeared to dominate.

“It’s the year when data is no longer static and post-processed,” said Brian Krzanich, Intel’s CEO. He also said: “The rise of autonomous cars will be the most ambitious data project of our lifetime.” In his keynote presentation, he demonstrated the massive volumes of data involved in the real-time processing needs of autonomous cars, sports events, smart robotics, mapping data collection and a myriad of other data-driven technologies on the horizon.

Many companies were promoting their graphics, gaming and other processors. IBM had a large invite-only room to show off its quantum computer; the 50-qubit chip is housed in a silver canister at the bottom of the device (not shown is the housing that keeps it at cryogenic temperatures). IBM is making the computer available via the cloud to 60,000 users working on 1.7 million experiments, as well as to commercial partners in finance, materials, automotive and chemistry. (Intel showed its 49-qubit chip, code-named Tangle Lake, in a segment of Krzanich’s keynote.)

Robots, robotics and startups

Robots were everywhere, ranging from ag bots, tennis bots, robot arms, robot prosthetics and robot wheelchairs to smart home companions, security robots and air, land and sea drones. In this short promotional video produced by CES, one can see the range of products on display. Note the number of Japanese, Chinese and Korean manufacturers.

One of the remarkable features of CES is what they call Eureka Park. It’s a whole floor of over 900 startup booths with entrepreneurs eager to explain their wares and plans. The area was supported by the NSF, Techstars and a host of others. It’s a bit overwhelming but absolutely fascinating.

Because it’s such a spread-out show, locations tend to blur. But I visited all the robotics and related vendors I could find in my 28,000-step two-day exploration and the following are ones that stuck out from the pack:

  • LiDAR and camera vision-system providers for self-driving vehicles, robots and automation, such as Velodyne, Quanergy and Luminar, were everywhere, showing near and far detection ranges; wide, narrow and 360° fields of view; solid-state and conventional mechanical designs; as well as software to consolidate all that data and make it meaningful.
    • Innoviz Technologies, an Israeli startup that has already raised $82 million, showed its solid-state device (the Pro), available now, and its low-cost automotive-grade product (the One), coming in 2019.
    • Bosch-supported Chinese startup Roadstar.ai is developing Level 4 multi-sensor fusion solutions (cameras, LiDARs, radars, GPS and others).
    • Beijing Visum Technology, another Chinese vision-system startup, uses what it calls “Natural Learning” to continually improve what is seen by ViEye, its stereoscopic, real-time vision detection system for logistics and industrial automation.
    • Korean startup EyeDea displayed both a robot vision system and a smart vision module (camera and chip) for deep learning and obstacle avoidance for the auto industry.
    • Occipital, a Colorado startup, is developing depth-sensing tech using twin infrared shutter cameras for indoor and outdoor scanning and tracking.
    • Aeolus Robotics, a Chinese/American startup working on a $10,000 home robot with functional arms and hands, and communication and interactivity similar to what IBM’s Watson offers, appeared to be focused on selling its system components: object recognition, facial/pedestrian recognition, deep-learning perception systems and auto safety cameras that interpret human expressions and actions such as fatigue.
  • SuitX, a Bay Area exoskeleton spin-off from Ekso Bionics, is focused on providing modular therapeutic help for people with limited mobility rather than industrial uses of assistive devices. Ekso, which wasn’t at CES, provides assistive systems that people strap into to make walking, lifting and stretching easier for workers and the military.
  • There were a few marine robots for photography, research and hull inspection:
    • Sublue Underwater AI, a Chinese startup, had a tank with an intelligent ROV capable of diving down to 240′ while sending back full HD camera and sensor data. They also make water tows.
    • RoboSea, also a Chinese startup, was showing land and sea drones for entertainment, research and rescue and photography.
    • QYSea, a Chinese startup, makes a 4K HD underwater camera robot that can dive to a depth of 325′.
    • CCROV, a brand of the Vxfly incubator at China’s Northwestern Polytechnical University, demonstrated a 10-pound tethered camera box with thrusters that can dive more than 300′. The compact device is designed for narrow underwater spaces and environments dangerous for people.
    • All were well-designed and packaged as consumer products.
  • UBTech, a Chinese startup that is doing quite well making small humanoid toy robots, including its $300 Star Wars Stormtrooper, showed that its line of robots is growing from toys into service robots. It demonstrated the first video-enabled humanoid robot with an Amazon Alexa communication system, touting the surveillance capabilities and avatar modes of this little (17″-tall) walking robot. It not only works with Alexa but can also get apps and skills and accept control from iOS and Android devices. Still, like many of the other home robots, it can’t grasp objects, so it can’t perform services beyond remote presence and Alexa-like skills.
  • My Special Aflac Duck won the CES Best Unexpected Product Award: a social robot that looks like the white Aflac duck and is designed to help children coping with cancer. This is the second healthcare-related robotic device for kids from Sproutel, the maker of the duck; its first product was Jerry the Bear, for kids with diabetes.
  • YYD Robo, another Chinese startup, was demonstrating a line of family-companion and medical-care robots (though, again, with hands that cannot grasp) that also serve as child-care and teaching robots. This Shenzhen-based company says it is fully staffed and has a million sq ft of manufacturing space, yet its robots weren’t working at the show and its website doesn’t come up.
  • Hease Robotics, a French startup, showed its Heasy robotic kiosk, a mobile guide for retail stores, office facilities and public areas. Many vending machine companies at CES showed how they are transitioning to smart machines, and in some cases to smart, robotic ones like the Heasy.
  • Haapie SAS, also a French startup, was showing its tiny interactive, social and cognitive robots, which offer consumer entertainment and Alexa-like capabilities. Haapie also integrates its voice recognition, speech synthesis and content management into smartphone clients.
  • LG, the Korean consumer products conglomerate, showed an ambitious line of robot products it calls CLOi. One is an industrial floor cleaner for public spaces that can also serve as a kiosk/guide; another has a built-in tray for food and drink delivery in hotels; and a third can carry luggage and will follow customers around stores, airports and hotels. During a press conference, one of the robots tipped over and another wouldn’t respond to instructions. Nevertheless, all three were well designed and purposed, and the floor cleaner and porter robots are going to help out at the airport during next month’s Winter Olympics.
  • Two Chinese companies showed follow-me luggage: 90FUN with its Puppy suitcase and ForwardX with its smart suitcases. 90FUN uses Ninebot/Segway follow-me technology for the Puppy.
  • Twinswheel, a French startup, was showing a prototype of their parcel delivery land drone for factories, offices and last-mile deliveries.
  • Dobot is a Chinese inventor/developer of a transformable educational robot and 3D printer, as well as a multi-functional industrial robot arm. It also makes a variety of vision and stabilizing systems that are incorporated into its printers and robots. It’s very clever science. There is even a laser add-on for cutting and/or engraving. Dobot had a very successful Kickstarter campaign in 2017.
  • Evolver Robots is a Chinese developer of mobile domestic service robots designed for children ages 4-12, offering education, video chat, games, mobile projection and remote control via smartphone.
  • Although drones had their own separate area at the show, I found them in many other locations. From the agricultural spraying drones by Yamaha, DJI and Taiwan’s Geosat Aerospace to the little deck-of-cards-sized ElanSelfie and the AEE Aviation selfie drone from Shenzhen, which folds down to 5″ × 1/2″, nothing among the other 40+ drone vendors stood out to compete with the might of DJI (which had two good-sized booths in different locations).
  • Segway (remember Dean Kamen?) is now fully owned by Ninebot, a Beijing provider of all types of robotic self-balancing scooters and devices. The company is now focused on lifestyle and recreational riders in the consumer market, including the Loomo, which you ride like a hoverboard, then load up with cargo and have follow you home or around a facility. At its booth, Ninebot was pushing the Loomo for logistics operations; however, it can’t carry much and has balance problems. They would do better having it tow a cart.
  • Yujin Robot, a large Korean maker of robotic vacuums, educational robots, industrial robots, mobility platforms for research and a variety of consumer products, was showing GoCart, its new logistics transport system, with three different configurations of autonomous point-to-point robots.
  • The Buddy robot from Blue Frog Robotics won CES’s Robotics and Drones Innovation Award, along with Soft Robotics, whose grippers and control system can pick items of varying size, shape and weight with a single device. Jibo, the Dobot (mentioned above) and 14 others also received the award.

[The old adage in robotics that for every service robot there is a highly skilled engineer by its side is still true: most of the social and home robots frequently didn’t work at the show and were either idle or being repaired.]

Silly things

    • A local strip club promoted their robotic pole dancers (free limo).
    • At a hospitality suite across from the convention center, the head of Harmony, the sex robot by San Diego-area Abyss Creations (RealDoll), was available for demos and interviews. It will begin shipping this quarter at $8,000 to $10,000.
    • Crowd-drawing events were everywhere but this one drew the largest audiences: Omron’s ping-pong playing robot.
    • FoldiMate, a laundry-folding robot, requires a human to feed it one article at a time for it to work (for just $980 in late 2019). Who’s the robot?
    • And Intel’s drones flew over the Bellagio hotel fountains in sync with the water and light musical show. Very cool.
    • ABC’s Shark Tank, the hit business-themed funding TV show, was searching for entrepreneurs with interesting products at an open call audition area.

Bottom Line

Each time I return from CES (I’ve been to at least six) I swear I’ll never go again. It’s exhausting as well as overwhelming. It’s impossible to get to all the places one needs to go — and it’s cold. Plus, once I get there, the products are often so new and untested that they fail or are over-presented with too much hype. (LG’s new CLOi products failed repeatedly at their press conference; Sony’s Aibo ignored commands at the Sony press event.)

I end up asking myself: “Is this technology really ready for prime time? Will it be ready by the promised delivery dates? Or was it all just a hope-fest? A search for funding?” I still have no answer… perhaps all are true; perhaps that’s why I keep going. It’s as if my mind sifts through all the hype and chaff and ends up with what’s important. There’s no doubt this show was great for robotics and that Asian (particularly Chinese) vendors are the new power players. Maybe that’s why I copied down the dates for CES 2019.

The GM/Cruise robocar interior is refreshingly spartan

GM revealed photos of what it says is the production form of its self-driving car, based on the Chevy Bolt and Cruise software. GM says the car will be released next year, which would make it almost surely the first such release from a major car company if they hit that date.

As reported in their press release and other sources, their goals are ambitious. While Waymo is the clear leader, it has deployed in Phoenix, probably the easiest big city for driving in the USA. Cruise/GM claims to have been working on the harder problem of a dense city like San Francisco.

What’s notable about the Cruise picture, though, is what’s not in it: namely, much in the way of a dashboard or controls. There is a small screen and a few controls, but little else. Likewise, Waymo’s third-generation “Firefly” car had almost no controls at all.

The car has climate controls and window controls and little else. Of course a touchscreen can control a lot of other things, especially when the “driver” does not have to watch the road.

Combine this with Daimler’s concept self-driving Smart-car interior, and you see the car industry’s thinking shifting toward that of the high-tech industry.

At most car conferences today, a large fraction of what you see is related to putting new things inside the car — fancier infotainment tools and other aspects of the “connected car.” These are the same companies who charge you $2,000 to put a navigation system into your car that you will turn off in 2 years because your phone does a better job.

It is not surprising that Google expects you to get your music, entertainment and connectivity from the device in your pocket; they make Android. It is a bigger step for GM and Daimler to realize this, and it bodes well for them.

While the car is not finalized, and the software certainly isn’t, GM feels it has settled on the hardware design of its version-1 car and is going into production on it. I suspect they will make changes and tweaks as new sensors come down the pipeline, but car companies have traditionally locked down the main elements of a hardware design a couple of years before a vehicle goes on sale.

One thing both of these cars need more of, which I know well from road tripping, is surfaces and places to put stuff. Cupholders and door pockets aren’t enough when a car is a work or relaxation station rather than something to drive.

What is not clear is whether they have been bold enough to get rid of many of the other features not needed if you’re not driving, like fancy adjustable seats. The side-view mirrors are gone, with sensors in their place. (It is widely anticipated that regulators will allow even human-driven cars to replace mirrors with cameras, since cameras see better and produce less drag.) Waymo’s Firefly kept its mirrors because the law still demands them.

GM is already working with NHTSA to get this car an exemption from the Federal Motor Vehicle Safety Standards, which require things like the steering wheel that GM removed. The feds say they will work quickly on this, so it seems likely. Several states are already preparing the necessary legal regime, and GM suggests it will deploy something in 2019.

Not too long ago, I would have ranked GM very far down the list of carmakers likely to succeed in the robocar world. After it acquired Cruise, it moved up the chart, but frankly I had been skeptical about how much a small startup, no matter how highly valued, could change a giant automaker. It now seems that intuition was wrong.

Engineers design artificial synapse for “brain-on-a-chip” hardware

From left: MIT researchers Scott H. Tan, Jeehwan Kim, and Shinhyun Choi
Image: Kuan Qiao

By Jennifer Chu

When it comes to processing power, the human brain just can’t be beat.

Packed within the squishy, football-sized organ are somewhere around 100 billion neurons. At any given moment, a single neuron can relay instructions to thousands of other neurons via synapses — the spaces between neurons, across which neurotransmitters are exchanged. There are more than 100 trillion synapses that mediate neuron signaling in the brain, strengthening some connections while pruning others, in a process that enables the brain to recognize patterns, remember facts, and carry out other learning tasks, at lightning speeds.

Researchers in the emerging field of “neuromorphic computing” have attempted to design computer chips that work like the human brain. Instead of carrying out computations based on binary, on/off signaling, like digital chips do today, the elements of a “brain on a chip” would work in an analog fashion, exchanging a gradient of signals, or “weights,” much like neurons that activate in various ways depending on the type and number of ions that flow across a synapse.

In this way, small neuromorphic chips could, like the brain, efficiently process millions of streams of parallel computations that are currently only possible with large banks of supercomputers. But one significant hangup on the way to such portable artificial intelligence has been the neural synapse, which has been particularly tricky to reproduce in hardware.

Now engineers at MIT have designed an artificial synapse in such a way that they can precisely control the strength of an electric current flowing across it, similar to the way ions flow between neurons. The team has built a small chip with artificial synapses, made from silicon germanium. In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting, with 95 percent accuracy.

The design, published today in the journal Nature Materials, is a major step toward building portable, low-power neuromorphic chips for use in pattern recognition and other learning tasks.

The research was led by Jeehwan Kim, the Class of 1947 Career Development Assistant Professor in the departments of Mechanical Engineering and Materials Science and Engineering, and a principal investigator in MIT’s Research Laboratory of Electronics and Microsystems Technology Laboratories. His co-authors are Shinhyun Choi (first author), Scott Tan (co-first author), Zefan Li, Yunjo Kim, Chanyeol Choi, and Hanwool Yeon of MIT, along with Pai-Yu Chen and Shimeng Yu of Arizona State University.

Too many paths

Most neuromorphic chip designs attempt to emulate the synaptic connection between neurons using two conductive layers separated by a “switching medium,” or synapse-like space. When a voltage is applied, ions should move in the switching medium to create conductive filaments, similarly to how the “weight” of a synapse changes.

But it’s been difficult to control the flow of ions in existing designs. Kim says that’s because most switching mediums, made of amorphous materials, have unlimited possible paths through which ions can travel — a bit like Pachinko, a mechanical arcade game that funnels small steel balls down through a series of pins and levers, which act to either divert or direct the balls out of the machine.

Like Pachinko, existing switching mediums contain multiple paths that make it difficult to predict where ions will make it through. Kim says that can create unwanted nonuniformity in a synapse’s performance.

“Once you apply some voltage to represent some data with your artificial neuron, you have to erase and be able to write it again in the exact same way,” Kim says. “But in an amorphous solid, when you write again, the ions go in different directions because there are lots of defects. This stream is changing, and it’s hard to control. That’s the biggest problem — nonuniformity of the artificial synapse.”
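
To see numerically why that nonuniformity matters, here is a toy sketch (my own illustration, not the paper's model): a synapse is programmed with the same nominal unit increment over 700 write cycles, with Gaussian noise standing in for ions wandering down different defect paths. The noisier the writes, the less reliably the stored weight can be read back.

```python
import numpy as np

rng = np.random.default_rng(0)

def rewrite_weight(n_writes: int, write_noise: float) -> np.ndarray:
    """Apply the same nominal unit increment n_writes times; write_noise
    models ions taking a different path through defects on each write."""
    steps = 1.0 + rng.normal(0.0, write_noise, size=n_writes)
    return np.cumsum(steps)

amorphous  = rewrite_weight(700, write_noise=0.25)   # many possible ion paths
engineered = rewrite_weight(700, write_noise=0.01)   # one dislocation path

# Relative error of the stored weight after 700 nominally identical writes:
for name, trace in (("amorphous", amorphous), ("engineered", engineered)):
    print(f"{name}: {abs(trace[-1] - 700.0) / 700.0:.4%}")
```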

A perfect mismatch

Instead of using amorphous materials as an artificial synapse, Kim and his colleagues looked to single-crystalline silicon, a defect-free conducting material made from atoms arranged in a continuously ordered alignment. The team sought to create a precise, one-dimensional line defect, or dislocation, through the silicon, through which ions could predictably flow.

To do so, the researchers started with a wafer of silicon, resembling, at microscopic resolution, a chicken-wire pattern. They then grew a similar pattern of silicon germanium — a material also used commonly in transistors — on top of the silicon wafer. Silicon germanium’s lattice is slightly larger than that of silicon, and Kim found that together, the two perfectly mismatched materials can form a funnel-like dislocation, creating a single path through which ions can flow. 

The researchers fabricated a neuromorphic chip consisting of artificial synapses made from silicon germanium, each synapse measuring about 25 nanometers across. They applied voltage to each synapse and found that all synapses exhibited more or less the same current, or flow of ions, with about a 4 percent variation between synapses — a much more uniform performance compared with synapses made from amorphous material.

They also tested a single synapse over multiple trials, applying the same voltage over 700 cycles, and found the synapse exhibited the same current, with just 1 percent variation from cycle to cycle.

“This is the most uniform device we could achieve, which is the key to demonstrating artificial neural networks,” Kim says.

Writing, recognized

As a final test, Kim’s team explored how its device would perform if it were to carry out actual learning tasks — specifically, recognizing samples of handwriting, which researchers consider to be a first practical test for neuromorphic chips. Such chips would consist of “input/hidden/output neurons,” each connected to other “neurons” via filament-based artificial synapses.

Scientists believe such stacks of neural nets can be made to “learn.” For instance, when fed an input that is a handwritten ‘1,’ with an output that labels it as ‘1,’ certain output neurons will be activated by input neurons and weights from an artificial synapse. When more examples of handwritten ‘1s’ are fed into the same chip, the same output neurons may be activated when they sense similar features between different samples of the same letter, thus “learning” in a fashion similar to what the brain does.

Kim and his colleagues ran a computer simulation of an artificial neural network consisting of three sheets of neural layers connected via two layers of artificial synapses, the properties of which they based on measurements from their actual neuromorphic chip. They fed into their simulation tens of thousands of samples from a handwriting recognition dataset commonly used by neuromorphic designers, and found that their neural network hardware recognized handwritten samples 95 percent of the time, compared to the 97 percent accuracy of existing software algorithms.
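
As a rough sketch of this kind of hardware-aware evaluation (my own, under stated assumptions; not the authors' code), one can train an ordinary three-layer network in software, then perturb its trained weights by the measured 4 percent synapse-to-synapse variation and re-measure accuracy:

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(x: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Three neuron layers (input/hidden/output) joined by two synapse layers."""
    hidden = np.maximum(0.0, x @ w1)    # hidden layer with ReLU activation
    return hidden @ w2                  # output class scores

def accuracy(x: np.ndarray, y: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> float:
    return float((forward(x, w1, w2).argmax(axis=1) == y).mean())

def with_device_variation(w: np.ndarray, sigma: float = 0.04) -> np.ndarray:
    """Scale each trained weight by the ~4% synapse-to-synapse variation
    measured on the chip, mimicking deployment onto the analog hardware."""
    return w * (1.0 + rng.normal(0.0, sigma, size=w.shape))

# w1 and w2 would come from ordinary software training on a handwriting
# dataset such as MNIST; comparing accuracy() before and after
# with_device_variation() approximates the 97%-software vs 95%-hardware gap.
```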

The team is in the process of fabricating a working neuromorphic chip that can carry out handwriting-recognition tasks, not in simulation but in reality. Looking beyond handwriting, Kim says the team’s artificial synapse design will enable much smaller, portable neural network devices that can perform complex computations that currently are only possible with large supercomputers.

“Ultimately we want a chip as big as a fingernail to replace one big supercomputer,” Kim says. “This opens a stepping stone to produce real artificial hardware.”

This research was supported in part by the National Science Foundation.

#252: Embedded Platform for SLAM, with Zhe Zhang



In this episode, Abate talks with Zhe Zhang from PerceptIn, where they are building embedded platforms that let robots run Simultaneous Localization and Mapping (SLAM) algorithms in real time. Zhe explains the methods they employ, such as sensor fusion and hardware synchronization, to build a highly accurate SLAM platform for IoT, consumer and automotive-grade robots.
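
To give a flavor of what hardware synchronization buys (a minimal sketch under my own assumptions, not PerceptIn's implementation): visual-inertial SLAM must pair each camera frame with the IMU samples captured since the previous frame, and when every sensor is stamped by one shared hardware clock, that pairing is exact rather than skewed by per-device software clocks.

```python
from bisect import bisect_right

def associate(frame_stamps_us, imu_stamps_us):
    """Pair each camera frame with the IMU samples recorded since the
    previous frame. Hardware sync means all timestamps share one clock,
    so this association has no skew or jitter to compensate for."""
    pairs, prev = [], -1
    for t in frame_stamps_us:
        lo = bisect_right(imu_stamps_us, prev)
        hi = bisect_right(imu_stamps_us, t)
        pairs.append((t, imu_stamps_us[lo:hi]))
        prev = t
    return pairs

frames_us = [0, 50_000, 100_000]            # 20 Hz camera, microsecond stamps
imu_us = [i * 5_000 for i in range(21)]     # 200 Hz IMU on the same clock
for t, samples in associate(frames_us, imu_us):
    print(t, samples)
```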

Zhe Zhang

Zhe is the co-founder and CEO of PerceptIn. Prior to founding PerceptIn, he worked at Magic Leap in Silicon Valley and prior to that he worked for five years at Microsoft. Zhang has a PhD in Robotics from the State University of New York and an undergraduate degree from Tsinghua University.

Further research recommended by Zhe:

  1. Probabilistic Data Association for Semantic SLAM
  2. A Comparative Analysis of Tightly-coupled Monocular, Binocular, and Stereo VINS
  3. Computer Vision for Autonomous Vehicles: Problems, Datasets and State-of-the-Art
