
Self-driving cars have power consumption problems

I recently chaired a UJA Tech Talk on “The Future Of Autonomous Cars” with former General Motors Vice-Chairman Steve Girsky. The auto executive enthusiastically shared his vision for the next 15-25 years of driving – a congestion-free world of automated wheeled capsules zipping commuters to and from work.

Girsky stated that connected cars with safety-assist (autonomy-lite) features are moving much faster toward mass adoption than fully autonomous vehicles (sans steering wheels and pedals). In his opinion, the largest roadblocks to a consumer-ready robocar are the technical inefficiencies of today’s prototypes, which burn huge amounts of energy to support enhanced computing and arrays of sensors. This pushes the sticker price closer to a 1972 Ferrari than a 2018 Prius.

As mainstream adoption relies heavily on converting combustion engines to electric at accessible pricing, Girsky’s sentiment was shared by many CES 2018 participants. NVIDIA, the leading chip manufacturer for autonomous vehicles, unveiled its latest technology, Xavier, with auto industry partner Volkswagen in Las Vegas. Xavier promises to be 15 times more energy-efficient than previous chip generations, delivering 30 trillion operations per second while drawing only 30 watts of power.

After the Xavier CES demonstration, Volkswagen CEO Herbert Diess exclaimed, “Autonomous driving, zero-emission mobility, and digital networking are virtually impossible without advances in AI and deep learning. Working with NVIDIA, the leader in AI technology, enables us to take a big step into the future.”

NVIDIA is becoming the industry standard as Volkswagen joins more than 320 companies and organizations working with the chip manufacturer on autonomous vehicles. While NVIDIA is leading the pack, Intel and Qualcomm are not far behind with their low-power solutions. Electric vehicle powerhouse Tesla is developing its own internal chip for the next generation of Autopilot. While these new chips represent a positive evolution in processors, there is still much work to be done, as current self-driving prototypes consume close to 2,500 watts.
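To put those figures in perspective, here is a quick back-of-the-envelope comparison. The 30-trillion-operations-per-second, 30-watt Xavier numbers and the roughly 2,500-watt prototype load come from the paragraphs above; treating operations-per-watt as the efficiency yardstick is a simplification of my own.

```python
# Back-of-the-envelope comparison using the figures cited above. The 30 TOPS /
# 30 W Xavier numbers and the ~2,500 W prototype compute load come from the
# article; treating operations-per-watt as the yardstick is a simplification.

XAVIER_OPS_PER_SEC = 30e12   # 30 trillion operations per second
XAVIER_WATTS = 30            # claimed power draw
PROTOTYPE_WATTS = 2500       # approximate compute load of today's prototypes

ops_per_watt = XAVIER_OPS_PER_SEC / XAVIER_WATTS
print(f"Xavier efficiency: {ops_per_watt:.1e} operations per watt")

# How many Xavier-class chips would fit inside a current prototype's power budget?
print(f"Xavier-class chips per prototype budget: {PROTOTYPE_WATTS / XAVIER_WATTS:.0f}")
```

By that crude measure, the compute budget of a current prototype could feed more than eighty Xavier-class chips, which is the efficiency gap the industry is racing to close.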

Power Consumption a Tradeoff for Self-Driving Cars

The power-consumption problem was highlighted recently in a report published by the University of Michigan Center for Sustainable Systems. Its lead author, Greg Keoleian, questions whether the current autonomous car models will slow the overall adoption of electric vehicles. Keoleian’s team simulated a number of self-driving Ford Fusion models with different-sized computer configurations and engine designs. In sharing his findings, Keoleian said, “We knew there was going to be a tradeoff in terms of the energy and greenhouse gas emissions associated with the equipment and the benefits gained from operational efficiency. I was surprised that it was so significant.”

Keoleian’s conclusions challenged the premise that self-driving cars will accelerate the adoption of renewable energy. For years, advocates of autonomous vehicles have claimed that smart driving will lead to a reduction of greenhouse gas emissions through the platooning of vehicles on highways and at intersections, the decrease of aerodynamic drag on freeways, and the overall reduction in urban congestion.


However, the University of Michigan tests showed only a “six to nine percent net energy reduction” over the vehicle’s lifecycle when running in autonomous mode. That benefit went down by five percent when using a large Waymo-style rooftop sensor package, as it increased the aerodynamic drag. The report also stated that the greatest net efficiencies were in cars with gas drivetrains, which benefit the most from smart driving. Waymo currently uses a hybrid Chrysler Pacifica to run its complex fusion of sensors and processing units.
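The study’s core tradeoff can be expressed as a simple energy balance: operational savings from smarter driving, minus penalties for onboard computing and added sensor drag. The sketch below illustrates that balance; the function form and sample percentages are my own illustrative assumptions, not the Michigan team’s model.

```python
# A minimal sketch of the lifecycle tradeoff the Michigan study describes:
# smart driving saves energy, while onboard computing and sensor drag add some
# of it back. The function form and sample numbers are illustrative assumptions,
# not the study's actual model.

def net_energy_change_pct(operational_savings, computing_penalty, drag_penalty):
    """Net lifecycle energy change in percent (negative means a net reduction)."""
    return -operational_savings + computing_penalty + drag_penalty

# Modest sensor package: numbers chosen to land in the reported 6-9% range.
print(net_energy_change_pct(operational_savings=12, computing_penalty=3, drag_penalty=2))   # -7

# Large rooftop package: extra drag erodes most of the benefit.
print(net_energy_change_pct(operational_savings=12, computing_penalty=3, drag_penalty=7))   # -2
```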

Keoleian told IEEE Spectrum that his modeling actually “overstates real impacts from future autonomous vehicles.” While he anticipates the reduction of computing and sensor drag, he is concerned that the impact of 5G communications has not been fully explored. The increased bandwidth will lead to greater data streams and boost power consumption for onboard systems and processors. In addition, he thinks that self-driving systems will lead to greater distances traveled as commuters move further away from city centers with the advent of easier commutes. Keoleian explains, “There could be a rebound effect. They could induce travel, adding to congestion and fuel use.” Keoleian points to a confusing conclusion by the U.S. National Renewable Energy Laboratory that presents two possible outcomes of full autonomy:

  1. A reduction in greenhouse gas emissions of sixty percent with greater ride-sharing options
  2. An increase of two hundred percent with longer driving distances


According to Wilko Stark, Mercedes-Benz’s Vice President of Strategy, it only makes sense for autonomous vehicles to be electric as the increased power requirements will go to the computers instead of the motors. “To put such a system into a combustion-engined car doesn’t make any sense, because the fuel consumption will go up tremendously,” explains Stark.


Girsky shares Stark’s view, as he predicted that the first large-scale use cases for autonomy will be fleets of souped-up golf carts running low-speed, pre-planned shuttle routes. Also on view at CES were complimentary autonomous shared taxi rides around Las Vegas, courtesy of French startup Navya. Today, Navya boasts 60 operating shuttles in more than 10 cities, including around the University of Michigan.

Fully autonomous cars might not be far behind, as Waymo has seen a ninety percent drop in component costs by bringing its sensor development in-house. The autonomous powerhouse recently passed the four-million-mile mark on public roads and is planning on ditching its safety driver later this year in its Phoenix, Arizona test program. According to Dmitri Dolgov, Waymo’s Vice President of Engineering, “Sensors on our new generation of vehicles can see farther, sharper, and more accurately than anything available on the market. Instead of taking components that might have been designed for another application, we engineered everything from the ground up, specifically for the task of Level 4 autonomy.”

With rising road fatalities and CO2 emissions, the world can’t wait too much longer for affordable, energy-efficient autonomous transportation. Girsky and others remind us there is still a long road ahead, as industry experts estimate that the current gas-burning Waymo Chrysler Pacifica cruising around Arizona costs more than one hundred times the sticker price of the minivan. I guess until then there is always Citi Bike.

New brain computer interfaces lead many to ask, is Black Mirror real?

It’s called the “grain,” a small IoT device implanted into the back of people’s skulls to record their memories. Human experiences are simply played back on “redo mode” using a smart button remote. The technology promises to reduce crime and terrorism and to simplify human relationships with greater transparency. While this is a description of the Netflix Black Mirror episode “The Entire History of You,” in reality the concept is not as far-fetched as it may seem. This week life came closer to imitating art with a $19 million grant by the US Department of Defense to a group of six universities to begin work on “neurograins.”

In the past, Brain-Computer Interfaces (BCIs) have utilized wearable technologies, such as headbands and helmets, to control robots, machines and various household appliances for people with severe disabilities. This new DARPA grant is focused on developing a “cortical intranet” for uplinks and downlinks directly to the cerebral cortex, potentially taking mind control to the next level. According to lead researcher Arto Nurmikko of Brown University, “What we’re developing is essentially a micro-scale wireless network in the brain enabling us to communicate directly with neurons on a scale that hasn’t previously been possible.”

Nurmikko points to the numerous medical outcomes of the research: “The understanding of the brain we can get from such a system will hopefully lead to new therapeutic strategies involving neural stimulation of the brain, which we can implement with this new neurotechnology.” The technology being developed by Nurmikko’s international team will eventually create a wireless neural communication platform able to record and stimulate brain activity at an unprecedented level of detail and precision. This will be accomplished by implanting a mesh network of tens of thousands of granular micro-devices into a person’s cranium. Surgeons will place this layer of neurograins around the cerebral cortex, where it will be controlled by an electronic patch placed just below the skin.

In describing how it will work, Nurmikko explains, “We aim to be able to read out from the brain how it processes, for example, the difference between touching a smooth, soft surface and a rough, hard one and then apply microscale electrical stimulation directly to the brain to create proxies of such sensation. Similarly, we aim to advance our understanding of how the brain processes and makes sense of the many complex sounds we listen to every day, which guide our vocal communication in a conversation and stimulate the brain to directly experience such sounds.”

Nurmikko further describes, “We need to make the neurograins small enough to be minimally invasive but with extraordinary technical sophistication, which will require state-of-the-art microscale semiconductor technology. Additionally, we have the challenge of developing the wireless external hub that can process the signals generated by large populations of spatially distributed neurograins at the same time.”

While current BCIs are able to process the activity of 100 neurons at once, Nurmikko’s objective is to work at a level of 100,000 simultaneous inputs. “When you increase the number of neurons tenfold, you increase the amount of data you need to manage by much more than that because the brain operates through nested and interconnected circuits,” Nurmikko remarks. “So this becomes an enormous big data problem for which we’ll need to develop new computational neuroscience tools.” The researchers plan to first test their theory in the sensory and auditory functions of mammals.
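A rough calculation shows why the jump from 100 to 100,000 channels is daunting even before the “nested and interconnected circuits” problem Nurmikko describes. The 30 kHz sampling rate and 16-bit samples below are typical values for neural recording hardware, used here purely as assumptions.

```python
# Rough data-rate estimate for scaling from 100 to 100,000 recording channels.
# The 30 kHz sampling rate and 16-bit samples are typical values for neural
# recording hardware, used here only as assumptions for illustration.

def raw_data_rate_mb_per_sec(channels, sample_rate_hz=30_000, bits_per_sample=16):
    return channels * sample_rate_hz * bits_per_sample / 8 / 1e6

for channels in (100, 100_000):
    print(f"{channels:>7} channels -> {raw_data_rate_mb_per_sec(channels):,.0f} MB/s of raw data")

# The raw stream grows linearly, but as Nurmikko notes, the analysis burden grows
# faster still because interconnected circuits must be modeled jointly.
```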

Brain-computer interfaces are one of the fastest-growing areas of healthcare technology; while the market is valued today at just under a billion dollars, it is forecast to grow to $2 billion within the next five years. According to market analysts, the uptick will be driven by an estimated increase in treating aging, fatal diseases and people with disabilities. The funder of Nurmikko’s project is DARPA’s Neural Engineering System Design program, which was formed to treat injured military personnel by “creating novel hardware and algorithms to understand how various forms of neural sensing and actuation might improve restorative therapeutic outcomes.” While DARPA’s project will produce numerous discoveries that improve the quality of life for society’s most vulnerable, it also opens a Pandora’s box of ethical issues with the prospect of the US military potentially funding armies of cyborgs.

In response to rising ethical concerns, last month ethicists from the University of Basel in Switzerland drafted a new biosecurity framework for research in neurotechnology. The biggest concern expressed in the report was the implementation of “dual-use” technologies that have both military and medical benefits. The ethicists called for a complete ban on such innovations and strongly recommended fast-tracking regulations to protect “the mental privacy and integrity of humans.”

The ethicists raise important questions about taking grant money from groups like DARPA: “This military research has raised concern about the risks associated with the weaponization of neurotechnology, sparking a debate about controversial questions: Is it legitimate to conduct military research on brain technology? And how should policy-makers regulate dual-use neurotechnology?” The suggested framework reads like a science fiction novel: “This has resulted in a rapid growth in brain technology prototypes aimed at modulating the emotions, cognition, and behavior of soldiers. These include neurotechnological applications for deception detection and interrogation as well as brain-computer interfaces for military purposes.” One is reminded, however, that the development of BCIs is moving faster than public policy can debate its merits.

The framework’s lead author, Marcello Ienca of Basel’s Institute for Biomedical Ethics, understands the tremendous positive benefits of BCIs for a global aging population, especially for people suffering from Alzheimer’s and spinal cord injuries. In fact, the Swiss team calls for increased private investment in these neurotechnologies, not an outright prohibition. At the same time, Ienca stresses that in order to protect against misuse, such as brain manipulation by nefarious global actors, it is critical to raise awareness and debate surrounding the ethical issues of implanting neurograins into human populations. In an interview with the Guardian last year, Ienca summed up his concern succinctly: “The information in our brains should be entitled to special protections in this era of ever-evolving technology. When that goes, everything goes.”

In the spirit of open debate, our next RobotLab forum will be on “The Future of Robotic Medicine” with Dr. Joel Stein of Columbia University and Kate Merton of JLabs on March 6th @ 6pm in New York City, RSVP.

Israel, a land flowing with AI and autonomous cars

I recently led a group of 20 American tech investors to Israel in conjunction with the UJA and Israel’s Ministry of Economy and Industry. We witnessed firsthand the innovation that has produced more than $22 billion of investments and acquisitions within the past year. We met with the university that produced Mobileye, with the investor that believed in its founder, and with the network of multinational companies supporting the startup ecosystem. Mechatronics is blooming in the desert, from the CyberTech convention in Tel Aviv to the robotics labs at Capsula to the latest autonomous driving inventions in the hills of Jerusalem.

Sitting in a suspended conference room that floats three stories above the ground, enclosed within the “greenest building in the Middle East,” I had the good fortune to meet Torr Polakow of Curiosity Lab. Torr is a PhD student of famed roboticist Dr. Goren Gordon, whose research tests the boundaries of human-robot social interaction. At the 2014 World Science Festival in New York City, Gordon partnered with MIT professor, and Jibo founder, Dr. Cynthia Breazeal to prove it is possible to teach machines to be curious. The experiment was conducted by embedding into Breazeal’s DragonBot a reinforcement learning algorithm that enabled the robot to acquire its own successful behaviors for engaging humans. The scientists programmed into the robot nine non-verbal expressions and a reward system that was activated based upon its ability to engage crowds: the greater the number of faces staring back at the robot, the bigger the reward. After two hours it had learned that by deploying its “sad face” with big eyes, evoking Puss in Boots from the movie Shrek, it captured the most sustained attention from the audience. The work of Gordon and Breazeal opened up the field of social robots and the ability to teach computers human empathy, which will eventually be deployed in fleets of mechanical caregivers for our aging society. Gordon is now working tirelessly to create a mathematical model of “Hierarchical Curiosity Loops (HCL)” for “curiosity-driven behavior” to “increase our understanding of the brain mechanisms behind it.”

Traveling east, we met with Yissum, the tech transfer arm of Hebrew University, the birthplace of the cherry tomato and Mobileye. The history of the automotive startup that became the largest Israeli initial public offering, and later a $15.3 billion acquisition by Intel, is a great example of how today’s Israeli innovations are evolving into larger enterprises more quickly than at any time in history. Mobileye was founded in 1999 by Professor Amnon Shashua, a leader in computer vision and machine learning software. Dr. Shashua realized early on that his technology had wider applications for industries outside of Israel, particularly automotive. Supported by his university, Mobileye’s first product, the EyeQ chip, was piloted by Tier 1 powerhouse Denso within years of the company’s inception. While it took Mobileye 18 years to achieve full liquidity, Israeli cybersecurity startup Argus was sold last month to Elektrobit for a rumored $400 million, just four years after its founding. Shashua’s latest computer vision startup, OrCam, is focused on giving sight to people with impaired vision.

In addition to Shashua’s successful portfolio of startups, computer vision innovator Professor Shmuel Peleg is probably most famous for helping catch the 2013 Boston Marathon bombers with his image-tracking software. Dr. Peleg’s company, BriefCam, tracks anomalies in multiple video feeds at once, layering simultaneous events on top of one another. This technique isolated the Boston bombers by programming known aspects of their appearance into the system (e.g., men with backpacks) and then isolating the suspects by their behavior. For example, as everyone was watching the marathon minutes before the explosion, the video captured the Tsarnaev brothers leaving the scene after dropping their backpacks on the street. BriefCam’s platform made it possible for the FBI to capture the perpetrators within 101 hours of the event, versus manually sifting through the more than 13,000 videos and 120,000 photographs taken by stationary and cell phone cameras.
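The workflow described above (filter by known appearance attributes, then isolate by anomalous behavior) can be sketched in a few lines of Python. The toy data model below is hypothetical; BriefCam’s actual Video Synopsis pipeline is proprietary.

```python
# A toy sketch of the filtering workflow described above: narrow by known
# appearance attributes, then isolate by anomalous behavior. The data model and
# rules are hypothetical; BriefCam's Video Synopsis pipeline is proprietary.

from dataclasses import dataclass

@dataclass
class Track:
    person_id: int
    has_backpack: bool
    dropped_bag: bool
    left_before_blast: bool

def candidate_suspects(tracks):
    # Step 1: filter by appearance (e.g., men with backpacks).
    with_backpacks = [t for t in tracks if t.has_backpack]
    # Step 2: isolate by behavior that diverges from the crowd.
    return [t for t in with_backpacks if t.dropped_bag and t.left_before_blast]

tracks = [
    Track(1, True, False, False),   # spectator carrying a backpack
    Track(2, True, True, True),     # matches both appearance and behavior
    Track(3, False, False, True),   # left early but no backpack
]
print([t.person_id for t in candidate_suspects(tracks)])  # -> [2]
```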

This past New Year’s Eve, BriefCam was deployed by New York City to secure the area around Times Square. In a joint statement, the FBI, NYPD, Department of Homeland Security and the New York Port Authority said they plan to deploy BriefCam’s Video Synopsis across all their video cameras as they “remain concerned about international terrorists and domestic extremists potentially targeting” the celebration. According to Peleg, “You can take the entire night and instead of watching 10 hours you can watch ten minutes. You can detect these people and check what they are planning.”

BriefCam is just one of thousands of new innovations entering the smart-city landscape from Israel. In the past three years, 400 new smart mobility startups have opened shop in Israel, accumulating over $4 billion in investment. During that time, General Motors’ Israeli research and development office grew from a few engineers to more than 200 leading computer scientists and roboticists. In fact, Tel Aviv is now a global hub of innovation for every major car manufacturer and automobile supplier, surrounded by new accelerators and international consortiums, from the Drive accelerator to EcoMotion, which seem to be opening up faster than Starbucks locations in the United States.

As the auto industry goes through its biggest revolution in 100 years, the entire world’s attention is now centered on this small country known for ground-breaking innovation. In the words of Israel’s Innovation Authority, “The acquisition of Mobileye by Intel this past March for $15.3 billion, one of the largest transactions in the field of auto-tech in 2017, has focused the attention of global corporations and investors on the tremendous potential of combining Israeli technological excellence with the autonomous vehicle revolution.” 

 

Sewing a mechanical future

SoftWear Automation’s Sewbot. Credit: SoftWear Automation

The Financial Times reported earlier this year that one of the largest clothing manufacturers, Hong Kong-based Crystal Group, proclaimed robotics could not compete with the cost and quality of manual labor. Crystal’s Chief Executive, Andrew Lo, emphatically declared, “The handling of soft materials is really hard for robots.” Lo did leave the door open for future consideration by acknowledging such budding technologies as “interesting.”

One company mentioned by Lo was Georgia Tech spinout SoftWear Automation. SoftWear made news last summer by announcing its contract with an Arkansas apparel factory to update 21 production lines with its Sewbot automated sewing machines. The factory is owned by Chinese manufacturer Tianyuan Garments, which produces over 20 million T-shirts a year for Adidas. The Chairman of Tianyuan, Tang Xinhong, boasted about his new investment, saying that once Sewbot brings costs down to $0.33 a shirt, “Around the world, even the cheapest labor market can’t compete with us.”

The challenge in automating cut-and-sew operations to date has been the handling of textiles, which come in a seemingly infinite number of varieties that stretch, skew, flop and move with great fluidity. To solve this problem, SoftWear uses computer vision to track each individual thread. According to its issued patents, SoftWear developed a specialized camera that captures threads at 1,000 frames per second and tracks their movements using proprietary algorithms. SoftWear embedded this camera around robot end effectors that manipulate the fabrics much as human fingers would. According to a description on IEEE Spectrum, these “micromanipulators, powered by precise linear actuators, can guide a piece of cloth through a sewing machine with submillimeter precision, correcting for distortions of the material.” To further ensure the highest level of quality, Sewbot uses a four-axis robotic arm with a vacuum gripper that picks and places the textiles on a sewing table with a 360-degree conveyor system and spherical rollers to quickly move the fabric panels around.
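In control terms, the patents describe a closed loop: the high-speed camera measures how far the tracked threads have drifted from the intended seam line, and the micromanipulators nudge the fabric back. Below is a minimal sketch of such a loop, assuming a simple proportional correction; the gain and tolerance values are illustrative, not SoftWear’s parameters.

```python
# A minimal sketch of a vision-guided correction loop: the thread tracker reports
# how far the fabric has drifted from the seam line and an actuator nudges it
# back. The proportional gain and tolerance are illustrative assumptions, not
# SoftWear's parameters.

def correct_fabric(measure_offset_mm, move_actuator_mm,
                   gain=0.5, tolerance_mm=0.1, max_steps=1000):
    """Apply proportional corrections until the seam offset is within tolerance."""
    for _ in range(max_steps):
        offset = measure_offset_mm()        # reading from the 1,000 fps thread tracker
        if abs(offset) <= tolerance_mm:     # submillimeter target reached
            return True
        move_actuator_mm(-gain * offset)    # nudge the cloth back toward the seam line
    return False

# Tiny simulation standing in for the real camera and linear actuators.
state = {"offset": 2.0}
done = correct_fabric(lambda: state["offset"],
                      lambda delta: state.update(offset=state["offset"] + delta))
print(done, round(state["offset"], 3))  # True 0.062
```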

SoftWear’s CEO, Palaniswamy “Raj” Rajan, explained, “Our vision is that we should be able to manufacture clothing anywhere in the world and not rely on cheap labor and outsourcing.” Rajan appears to be working hard toward that goal, professing that his robots are already capable of producing more than 2 million products sold at Target and Walmart. According to IEEE Spectrum, Rajan further asserted in a press release that by the end of 2017, Sewbot would be on track to produce “30 million pieces a year.” It is unclear if that objective was ever met. SoftWear did announce the closing of its $7.5 million financing round by CTW Venture Partners, a firm of which Rajan is also the managing partner.

SoftWear Automation is not the only company focused on automating the trillion-dollar apparel industry. Sewbo has been turning heads with its innovative approach to fabric manipulation. Unlike SoftWear, which is taking the more arduous route of revolutionizing the machines, Sewbo turns textiles into hardened material that is easy for off-the-shelf robots and existing sewing appliances to handle. Sewbo’s secret sauce, literally, is a water-soluble thermoplastic stiffening solution that turns cloth into a cardboard-like material. Blown away by its creativity and simplicity, I sat down with its inventor Jonathan Zornow last week to learn more about the future of fashion automation.

After researching the best polymers to laminate safely onto fabrics using a patent-pending technique, Zornow explained that last year he was able to unveil the “world’s first and only robotically-sewn garment.” Since then, Zornow has been contacted by almost every apparel manufacturer (excluding Crystal) to explore automating their production lines. Zornow has hit a nerve with an industry, especially in Asia, that finds itself in the labor management business, with monthly attrition rates of 10% and huge drop-offs after Chinese New Year. Zornow shared that “many factory owners were in fact annoyed that they couldn’t buy the product today.”

Zornow believes that automation technologies could initially be a boon for bringing small-batch production back to the USA, prompting an industry of “mass customization” closer to the consumer. As reported in 2015, apparel brands have been moving manufacturing back from China with 3D-printing technologies for shoes and knit fabrics. Long-term, Zornow said, “I think that automation will be an important tool for the burgeoning reshoring movement by helping domestic factories compete with offshore factories’ lower labor costs. When automation becomes a competitive alternative, a big part of its appeal will be how many headaches it relieves for the industry.”

To date, fulfilling the promise of “Made in America” has proven difficult; as Zornow explained, we have forgotten how to make things here. According to a recent report by the American Apparel & Footwear Association, the share of apparel manufactured in the US fell from 50% in 1994 to roughly 3% in 2015, meaning 97% of clothing today is imported. For example, Zornow shared with me how his competitor was born. In 2002, the US Congress passed the Berry Amendment requiring the armed services to source uniforms domestically, which led DARPA to grant $1.75 million to a Georgia Tech team to build a prototype of an automated sewing machine. As Rajan explains, “The Berry Amendment went into effect restricting the military from procuring clothing that was not made in the USA. Complying with the rule proved challenging due to a lack of skilled labour available in the US that only got worse as the current generation of seamstresses retired with no new talent to take their place. It was under these circumstances that the initial idea for Softwear was born and the company was launched in 2012.”

I first met Zornow at RoboBusiness last September, when he demonstrated for a packed crowd how Sewbo is able to efficiently recreate the 10-20 steps needed to sew a T-shirt. However, producing a typical men’s dress shirt can require up to 80 different steps. Zornow pragmatically explains the road ahead: “It will be a very long time, if ever, before things are 100% automated.” He points to examples of current automation in fabric production, such as dyeing, cutting and finishing, which augment manual labor. Following this trend, “they’re able to leverage machines to achieve incredible productivity, to the point where the labor cost to manufacture a yard of fabric is usually de minimis.” Zornow foresees a future where his technology is just another step in the production line, as forward-thinking factories are planning two decades ahead, recognizing that in order “to stay competitive they need new technologies” like Sewbo.

CES 2018 recap: Out of the dark ages

As close to a quarter million people descended on a city of six hundred thousand, CES 2018 became the perfect metaphor for the current state of modern society. Unfazed by floods, blackouts, and transportation problems, technology steamrolled ahead. Walking the floor last week at the Las Vegas Convention Center (LVCC), I could hear the crowd buzzing as it celebrated the long-awaited arrival of the age of social robots, autonomous vehicles, and artificial intelligence.

In the same way that Alexa was last year’s CES story, social robots were everywhere this year, turning the show floor into a Disney-inspired mechatronic festival. The applications promoted ranged from mobile airport and advertising kiosks to retail customer service agents to in-home family companions. One French upstart, Blue Frog Robotics, stood out from the crowd of ivory-colored rolling bots. Blue Frog joined the massive French contingent at Eureka Park and showcased its Buddy robot to the delight of thousands of passing attendees.

When I met with Blue Frog’s founder Rodolphe Hasselvander, he described his vision of a robot that is closer to a family pet than to a cold-voiced utility assistant. Unlike other entrants in the space, Buddy is more like a Golden Retriever than an iPad on wheels. Its cute, big blinking eyes immediately captivate the user, forming a tight bond between robopet and human. Hasselvander demonstrated how Buddy performs a number of unique tasks, including: patrolling the home perimeter for suspicious activity; looking up recipes in the kitchen; providing instructions for do-it-yourself projects; entertaining the kids with read-along stories; and even reminding grandma to take her medicine. Buddy will be available for “adoption” in time for the 2018 holiday season for $1,500 on Blue Frog’s website.

Blue Frog’s innovation was recognized by the Consumer Electronics Show with a “Best of Innovation” award. In accepting the honor, Hasselvander said, “The icing on the cake is that our Buddy robot is named a ‘Best of Innovation’ Award, as we believe Buddy truly is the most full-featured and best-engineered companion robot in the market today. This award truly shows our ongoing commitment to produce a breakthrough product that improves our lives and betters our world.”

Last year, one of the most impressive CES sights was the self-driving track in the parking lot of the LVCC’s North Hall. This year, the autonomous cars hit the streets with live demos and shared taxi rides. Lyft partnered with Aptiv to offer ride hails to and from the convention center, among other preprogrammed stops. While the cars were monitored by a safety driver, Aptiv explicitly built the experience to “showcase the positive impact automated cars will have on individual lives and communities.” Lyft was not the only self-driving option on the road; autonomous shuttles developed by Boston-based Keolis and French company Navya for the city of Las Vegas were providing trips throughout the conference. Inside the LVCC, shared mobility was a big theme among several auto exhibitors.

Torc Robotics made news days before the opening of CES when its autonomous vehicle successfully navigated around an SUV going the wrong way into oncoming traffic. Michael Fleming, Chief Executive of Torc, shared with a packed crowd the video feed from the dashcam, providing a play-by-play account. He noted that a human-driven car next to his self-driving Lexus RX 450 followed the same path to avoid the collision. Fleming posted the dashcam video online to poll other human drivers, using their responses to program a number of scenarios into his AI to avoid future clashes with reckless drivers.

The Virginia-based technology company has been conducting self-driving tests for more than a decade. Torc placed third in the 2007 DARPA Urban Challenge, a prestigious honor for researchers tackling autonomous city driving. Fleming is quick to point out that, along with Waymo (Google), Torc is one of the earliest entrants into the self-driving space. “There are an infinite number of corner cases,” explains Fleming, whose company relies on petabytes of driving data built up over a decade. Fleming explained to a hushed room that each time something out of the ordinary happens, like the wrong-way driver days prior, Torc logs how its car handled the situation and continues to make refinements to the car’s brain. The next time a similar situation comes up, any Torc-enabled vehicle will instantly implement the proper action. Steve Crowe of Robotics Trends said it best after sitting in the passenger seat of Fleming’s self-driving car: “I can honestly say that I didn’t realize how close we were to autonomous driving.”
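Fleming’s description amounts to a fleet learning loop: log the corner case, refine the driving policy, and push the result to every vehicle. The sketch below outlines that loop in schematic form; the class and method names are hypothetical, not Torc’s actual software.

```python
# A schematic sketch of the corner-case loop Fleming describes: log anything out
# of the ordinary, refine the driving policy, and push the update to the whole
# fleet. Class and method names are hypothetical, not Torc's actual software.

class FleetLearningLoop:
    def __init__(self, vehicles):
        self.vehicles = vehicles
        self.corner_cases = []

    def log_event(self, description, vehicle_response):
        # Each unusual situation is recorded along with how the car handled it.
        self.corner_cases.append({"event": description, "response": vehicle_response})

    def refine_and_deploy(self):
        # In reality this step would retrain or adjust the planner; here we just
        # broadcast the accumulated cases so every vehicle reacts consistently.
        for vehicle in self.vehicles:
            vehicle["known_cases"] = list(self.corner_cases)

fleet = [{"id": i, "known_cases": []} for i in range(3)]
loop = FleetLearningLoop(fleet)
loop.log_event("wrong-way SUV in oncoming lane", "slowed and steered right")
loop.refine_and_deploy()
print(len(fleet[0]["known_cases"]))  # -> 1: every vehicle now knows this case
```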

AI was everywhere last week in Las Vegas, inside and outside the show. Unlike cute robots and cars, artificial intelligence is difficult to display in glitzy booths. While many companies, including Intel and Velodyne, proudly showed off their latest sensors, it became very clear that true AI was the defining difference for many innovations. Tucked in the back of the Sands Convention Center, New York-based Emoshape demonstrated a new type of microchip with embedded AI to synthetically create emotions in machines. The company’s Emotion Processing Unit (EPU) is being marketed for the next generation of social robots, self-driving cars, IoT devices, toys, and virtual-reality games.

When I spoke with Emoshape founder Patrick Levy-Rosenthal, he explained his latest partnership with the European cellphone provider Orange. Levy-Rosenthal is working with Orange’s Silicon Valley innovation office to develop new types of content with its emotion synthesis technology. As an example, Emoshape’s latest avatar, JADE, is an attractive female character with real-time sentiment-based protocols. According to the company’s press release, its software “engine delivers high-performance machine emotion awareness, and allows personal assistant, games, avatars, cars, IoT products and sensors to feel and develop a unique personality.” JADE and Emoshape’s VR gaming platform DREAM are being evaluated by Orange for a number of enterprise and consumer use cases.

Emotions ran high at CES, especially during the two-hour blackout. On my redeye flight back to New York, I dreamt of next year’s show with scores of Buddy robots zipping around the LVCC greeting attendees, with their embedded Emoshape personalities, leading people to fleets of Torc-mobiles around the city. Landing at JFK airport, the biting wind and frigid temperature woke me up to the reality that I have 360 more days until CES 2019.

 

An emotional year for machines

Two thousand seventeen has certainly been an emotional year for mankind. While Homo sapiens continue to yell at Alexa and Siri, the degree to which people are willing to pursue virtual relationships over human ones is startling.

In a recent documentary by Channel 4 of the United Kingdom, it was revealed that Abyss Creations is flooded with pre-orders for its RealDoll AI robotic (intimate) companion. According to Matt McMullen, Chief Executive of Abyss, “With the Harmony AI, they will be able to actually create these personalities instead of having to imagine them. They will be able to talk to their dolls, and the AI will learn about them over time through these interactions, thus creating an alternative form of relationship.”

The concept of machines understanding human emotions, and reacting accordingly, was featured prominently at AI World a couple of weeks ago in Boston. Rana el Kaliouby, founder of artificial intelligence company Affectiva, thinks a lot about computers acquiring emotional intelligence. Affectiva is building a “multi-modal emotion AI” to enable robots to understand human feelings and behavior.

“There’s research showing that if you’re smiling and waving or shrugging your shoulders, that’s 55% of the value of what you’re saying – and then another 38% is in your tone of voice,” describes el Kaliouby. “Only 7% is in the actual choice of words you’re saying, so if you think about it like that, in the existing sentiment analysis market which looks at keywords and works out which specific words are being used on Twitter, you’re only capturing 7% of how humans communicate emotion, and the rest is basically lost in cyberspace.” Affectiva’s strategy is already paying off, as more than one thousand global brands are employing its Emotion AI to analyze facial imagery and gauge consumers’ affinity for their products.
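The 55/38/7 split el Kaliouby cites suggests why keyword-only sentiment analysis misses most of the signal. The toy weighted fusion below illustrates the point; it is an illustration built on those percentages, not Affectiva’s model.

```python
# A toy fusion of the 55/38/7 split el Kaliouby cites (body language, tone, words).
# The weighted sum is only an illustration of why keyword-only sentiment analysis
# captures a small slice of the signal; it is not Affectiva's model.

WEIGHTS = {"facial_and_body": 0.55, "vocal_tone": 0.38, "words": 0.07}

def fused_sentiment(scores):
    """Combine per-channel sentiment scores (each in [-1, 1]) into one estimate."""
    return sum(WEIGHTS[channel] * value for channel, value in scores.items())

# A sarcastic "great, thanks": positive words, negative face and tone.
scores = {"facial_and_body": -0.6, "vocal_tone": -0.4, "words": 0.8}
print(round(fused_sentiment(scores), 3))             # -0.426: overall negative
print(round(WEIGHTS["words"] * scores["words"], 3))  # 0.056: the text-only view
```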

Embedding empathy into machines goes beyond advertising campaigns. In healthcare, emotional sensors are giving doctors early warning signs of a variety of conditions, including Parkinson’s, heart disease, autism and suicide risk. Unlike Affectiva, Beyond Verbal is utilizing voice analytics to track biomarkers for chronic illness. The Israeli startup grew out of a decade and a half of university research with seventy thousand clinical subjects speaking thirty languages. The company’s patented “Mood Detector” is currently being deployed by the Mayo Clinic to detect early signs of coronary artery disease.

Beyond Verbal’s Chief Executive, Yuval Mor, foresees a world of empathetic smart machines listening for every human whim. As Mor explains, “We envision a world in which personal devices understand our emotions and wellbeing, enabling us to become more in tune with ourselves and the messages we communicate to our peers.” Mor’s view is embraced by many who sit at the convergence of technology and healthcare. Boston-based Sonde is also using algorithms to analyze the tone of speech and report on the mental state of patients, alerting neurologists to the risk of depression, concussion, and other cognitive impairments.

“When you produce speech, it’s one of the most complex biological functions that we do as people,” according to Sonde founder Jim Harper. “It requires incredible coordination of multiple brain circuits, large areas of the brain, coordinated very closely with the musculoskeletal system. What we’ve learned is that changes in the physiological state associated with each of these systems can be reflected in measurable, objective acoustic features in the voice. So we’re really measuring not what people are saying, in the way Siri does; we’re focusing on how you’re saying what you’re saying, and that gives us a path to really be able to do pervasive monitoring that can still provide strong privacy and security.”

While these AI companies are building software and app platforms to augment human diagnosis, many roboticists are looking to embed such platforms into the next generation of unmanned systems. Emotion-tracking algorithms can provide real-time monitoring for semi-autonomous and autonomous cars by reporting on the level of fatigue, distraction and frustration of the driver and passengers. The National Highway Traffic Safety Administration estimates that 100,000 crashes nationwide are caused every year by driver fatigue. For more than a decade, technologists have been wrestling with developing better alert systems inside the cabin. For example, in 1997 James Russell Clarke and Phyllis Maurer Clarke developed a “Sleep Detection and Driver Alert Apparatus” (US Patent 5689241 A) using imaging to track eye movements and thermal sensors to monitor “ambient temperatures around the facial areas of the nose and mouth” (i.e., breathing). Today, with the advent of cloud computing and deep learning networks, Clarke’s invention could save even more lives.
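A minimal rule-based sketch in the spirit of the Clarke patent might combine camera-measured eye closure with a breathing-related signal and alert only when both point to drowsiness. The thresholds below are assumptions for illustration.

```python
# A minimal rule-based sketch in the spirit of the Clarke patent: combine eye
# closure measured by a camera with a breathing-related thermal signal, and alert
# only when both point to drowsiness. The thresholds are assumptions.

def drowsiness_alert(eye_closure_sec, breaths_per_min,
                     closure_threshold_sec=2.0, slow_breathing_bpm=10):
    eyes_closed_too_long = eye_closure_sec >= closure_threshold_sec
    breathing_slowed = breaths_per_min <= slow_breathing_bpm
    return eyes_closed_too_long and breathing_slowed

print(drowsiness_alert(eye_closure_sec=2.5, breaths_per_min=9))   # True: wake the driver
print(drowsiness_alert(eye_closure_sec=0.3, breaths_per_min=14))  # False: a normal blink
```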

Tarek El Dokor, founder and Chief Executive of EDGE3 Technologies, has been very concerned about the car industry’s rush toward autonomous driving, which in his opinion might be “side-stepping the proper technology development path and overlooking essential technologies needed to help us get there.” El Dokor is referring to Tesla’s rush to release its Autopilot software last year, which led to customers trusting the computer system too much. YouTube is littered with videos of Tesla customers taking their hands and eyes off the road to watch movies, play games and read books. Ultimately, this misuse led to the untimely death of Joshua Brown.

To protect against autopilot accidents, EDGE3 monitors driver alertness through a combined hardware and software platform of “in-cabin cameras that are monitoring drivers and where they are looking.” In El Dokor’s opinion, image processing is the key to guaranteeing a safe handoff between machines and humans. He boasts that his system combines “visual input from the in-cabin camera(s) with input from the car’s telematics and advanced driver-assistance system (ADAS) to determine an overall cognitive load on the driver. Level 3 (limited self-driving) cars of the future will learn about an individual’s driving behaviors, patterns, and unique characteristics. With a baseline of knowledge, the vehicle can then identify abnormal behaviors and equate them to various dangerous events, stressors, or distractions. Driver monitoring isn’t simply about a vision system, but is rather an advanced multi-sensor learning system.” This multi-sensor approach is even being used before cars leave the lot. In Japan, Sumitomo Mitsui Auto Service is embedding AI platforms inside dashcams to determine the driving safety of potential lessees during test drives. By partnering with a local 3D graphics company, Digital Media Professionals, Sumitomo Mitsui is automatically flagging dangerous behavior, such as dozing and texting, before customers drive home.
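El Dokor’s multi-sensor idea can be sketched as a two-step calculation: fuse gaze, telematics, and ADAS signals into a load score, then flag departures from the driver’s learned baseline. The weights, features, and z-score rule below are illustrative assumptions, not EDGE3’s system.

```python
# A simplified sketch of the multi-sensor idea El Dokor describes: fuse in-cabin
# gaze data with telematics and ADAS signals into a load score, then flag
# departures from the driver's learned baseline. Weights, features, and the
# z-score rule are illustrative assumptions, not EDGE3's system.

from statistics import mean, stdev

def cognitive_load(gaze_off_road_ratio, steering_variability, adas_warnings_per_min):
    return (0.5 * gaze_off_road_ratio
            + 0.3 * steering_variability
            + 0.2 * adas_warnings_per_min)

def is_abnormal(current_load, baseline_loads, z_threshold=2.0):
    mu, sigma = mean(baseline_loads), stdev(baseline_loads)
    return sigma > 0 and (current_load - mu) / sigma > z_threshold

baseline = [cognitive_load(0.10, 0.20, 0.0), cognitive_load(0.12, 0.25, 0.1),
            cognitive_load(0.08, 0.18, 0.0), cognitive_load(0.11, 0.22, 0.0)]
now = cognitive_load(gaze_off_road_ratio=0.6, steering_variability=0.5, adas_warnings_per_min=1.0)
print(is_abnormal(now, baseline))  # True: time to alert the driver or slow the handoff
```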

The key to the mass adoption of autonomous vehicles, and even humanoids, is reducing the friction between humans and machines. Already, in Japanese retail settings, SoftBank’s Pepper robot scans people’s faces and listens to tonal inflections to determine the correct selling strategy. Emotional AI software is the first of many such steps that will be heralded in the coming year. As a prelude to what’s to come, first robot citizen Sophia declared last month, “The future is, when I get all of my cool superpowers, we’re going to see artificial intelligence personalities become entities in their own rights. We’re going to see family robots, either in the form of, sort of, digitally animated companions, humanoid helpers, friends, assistants and everything in between.”

Robots solving climate change

The two biggest societal challenges for the twenty-first century are also the biggest opportunities: automation and climate change. The confluence of these forces of mankind and nature intersects beautifully in the alternative energy market. The era of fossil fuels, with its dark cloud of emissions, is giving way to a rise of solar and wind farms worldwide. Servicing these installations are fleets of robots and drones, providing greater possibilities for expanding CleanTech to the most remote regions of the planet.

As 2017 comes to an end, the solar industry has plateaued for the first time in ten years due to the proposed budget cuts by the Trump administration. Solar has had quite a run, with an average annual growth rate of more than 65% over the past decade, propelled largely by federal subsidies. The progressive policy of the Obama administration made the US a leader in alternative energy, resulting in a quarter-million new jobs. While the Federal Government now re-embraces the antiquated allure of fossil fuels, global demand for solar has been rising as infrastructure costs decline by more than half, providing new opportunities without government incentives.

Prior to the renewable energy boom, unattractive rooftop panels were the most visible image of solar. While Elon Musk and others are developing more aesthetically pleasing roofing materials, the business model of house-by-house conversion has proven inefficient. Instead, the industry is focusing on “utility-scale” solar farms that will be connected to the national grid. Until recently, such farms have been saddled with ballooning servicing costs.

In a report published last month, leading energy risk management company DNV GL argued that the alternative energy market could benefit greatly from utilizing artificial intelligence (AI) and robotics in designing, developing, deploying and maintaining utility farms. The study, “Making Renewables Smarter: The Benefits, Risks, And Future of Artificial Intelligence In Solar And Wind,” cited the “fields of resource forecasting, control and predictive maintenance” as ripe for tech disruption. Elizabeth Traiger, co-author of the report, explained, “Solar and wind developers, operators, and investors need to consider how their industries can use it, what the impacts are on the industries in a larger sense, and what decisions those industries need to confront.”

Since solar farms are often located in arid, dusty locations, one of the earliest use cases for unmanned systems was self-cleaning robots. As reported in 2014, Israeli company Ecoppia developed a patented waterless panel-washing platform to keep solar up and running in the desert. Today, Ecoppia is cleaning 10 million panels a month. Eran Meller, Chief Executive of Ecoppia, boasts, “We’re pleased to bring the experience gained over four years of cleaning in multiple sites in the Middle East. Cleaning 80 million solar panels in the harshest desert conditions globally, we expect to continue to play a leading role in this growing market.”

Since Ecoppia began selling commercially, other entrants have moved into the unmanned maintenance space. This past March, Exosun became the latest to offer autonomous cleaning bots. The tracker equipment manufacturer claims that its robotic systems can cut production losses by 2%, promising a return on investment within 18 months. After its acquisition of Greenbotics in 2013, US-based SunPower also launched its own mechanized cleaning platform, the Oasis, which combines mobile robots and drones.

SunPower brags that its products are ten times faster than traditional (manual) methods while using 75% less water. While SunPower and Exosun leverage their large sales footprints and existing servicing and equipment networks, Ecoppia is still the product leader. Its proprietary waterless system is the most cost-effective and connected offering on the market. Via a robust cloud network, Ecoppia can sense weather fluctuations and automatically schedule emergency cleanings. Anat Cohen Segev, Director of Marketing, explains, “Within seconds, we would detect a dust storm hitting the site, the master control will automatically suggest an additional cleaning cycle and within a click the entire site will be cleaned.” According to Segev, the robots remove 99% of the dust on the panels.
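The scheduling behavior Segev describes is essentially event-driven: when the cloud service detects a dust storm at a site, it queues an extra cleaning cycle on top of the regular schedule. The sketch below illustrates that logic; the threshold and data shapes are hypothetical, not Ecoppia’s API.

```python
# A small event-driven sketch of the behavior Segev describes: when the cloud
# service detects a dust storm at a site, it queues an extra cleaning cycle on
# top of the regular schedule. The threshold and data shapes are hypothetical,
# not Ecoppia's API.

DUST_STORM_THRESHOLD = 150  # particulate reading above which a storm is assumed

def plan_cleanings(sites):
    """Return (site, reason) cleaning jobs for the next cycle."""
    jobs = [(site["name"], "nightly schedule") for site in sites]
    for site in sites:
        if site["particulate_reading"] > DUST_STORM_THRESHOLD:
            jobs.append((site["name"], "emergency: dust storm detected"))
    return jobs

sites = [
    {"name": "Negev-1", "particulate_reading": 320},
    {"name": "Negev-2", "particulate_reading": 40},
]
for job in plan_cleanings(sites):
    print(job)
```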

Drone companies are also entering the maintenance space. Upstart Aerial Power claims to have designed a “SolarBrush” quadcopter that cleans panels. The solar-powered drone is said to reduce a solar farm’s operational costs by 60%. SolarBrush also promises an 80% savings over existing solutions like Ecoppia, since there are no installation costs. However, Aerial Power has yet to fly its product in the field, as it is still in development. SunPower, meanwhile, is selling its own drone survey platform to assess development sites and oversee field operations. Matt Campbell, Vice President of Power Plant Products for SunPower, stated, “A lot of the beginning of the solar industry was focused on the panel. Now we’re looking at innovation all around the rest of the system. That’s why we’re always surveying new technology — whether it’s a robot, whether it’s a drone, whether it’s software — and saying, ‘How can this help us reduce the cost of solar, build projects faster, and make them more reliable?’”

In 2008, the US Department of Energy published an ambitious proposal, “20% Wind Energy by 2030: Increasing Wind Energy’s Contribution to U.S. Electricity Supply.” Today, thirteen years from that goal, less than 5% of US energy is derived from wind. Developing wind farms is not novel; however, to achieve 20% by 2030, the US needs to begin looking offshore. To put it in perspective, oceanic wind farms could generate more than 2,000 gigawatts of clean, carbon-free energy, or twice as much electricity as Americans currently consume. To date, there is only one wind farm operating off the coast of the United States. While almost every coastal state has proposals for offshore farms, the industry has been stalled by politics and the servicing hurdles of dangerous waters.

For more than a decade, the United Kingdom has led the development of offshore wind farms. At the University of Manchester, a leading group of researchers has been exploring a number of AI, robotic and drone technologies for remote inspections. The consortium of academics estimates that these technologies could generate more than $2.5 billion by 2025 in the UK alone. The global offshore market could reach $17 billion by 2020, with 80% of the costs coming from operations and maintenance.

Last month, Innovate UK awarded $1.6 million to Perceptual Robotics and VulcanUAV to incorporate drones and autonomous boats into ocean inspections. These startups follow the business model of successful US inspection upstarts like SkySpecs. Launched three years ago, SkySpecs’ autonomous drones claim to reduce turbine inspections from days to minutes. Danny Ellis, SkySpecs Chief Executive, claims, “Customers that could once inspect only one-third of a wind farm can now do the whole farm in the same amount of time.” Last year, British startup Visual Working accomplished the herculean feat of surpassing 2,000 blade inspections.

In the words of Paolo Brianzoni, Chief Executive of Visual Working: “We are not talking about what we intend to accomplish in the near future – but actually performing our UAV inspection service every day out there. Many in the industry are using a considerable amount of time discussing and testing how to use UAV inspections in a safe and efficient way. We have passed that point and have, in the first half of 2016 alone, inspected 250 turbines in the North Sea, averaging more than 10 WTG per day, and still keeping to the highest quality standards.”

This past summer, Europe achieved another clean-energy milestone with the announcement of three new offshore wind farms to be built, for the first time, without government subsidies. By bringing down the cost structure, autonomous systems are turning the tide for alternative energy regardless of government investment. Three days before leaving office, President Barack Obama wrote in the journal Science last year that “Evidence is mounting that any economic strategy that ignores carbon pollution will impose tremendous costs to the global economy and will result in fewer jobs and less economic growth over the long term.” He declared that it is time to move past common misconceptions that climate policy is at odds with business; “rather, it can boost efficiency, productivity, and innovation.”

So where are the jobs?

Dan Burstein, reporter, novelist and successful venture capitalist, declared Wednesday night at RobotLab‘s winter forum on Autonomous Transportation & SmartCities that within one hundred years the majority of jobs in the USA (and the world) could disappear, transferring the mantle of work from humans to machines. Burstein cautioned the audience that unless governments address the threat of millions of unemployable humans with a wider safety net, democracy could fail. The wisdom of one of the world’s most successful venture investors did not fall on deaf ears.

In their book Only Humans Need Apply, Thomas Davenport and Julia Kirby also warn that humans are too easily ceding their future to machines. “Many knowledge workers are fearful. We should be concerned, given the potential for these unprecedented tools to make us redundant. But we should not feel helpless in the midst of the large-scale change unfolding around us,” state Davenport and Kirby. The premise of their book is not to deny the disruption caused by automation, but to empower its readers with the knowledge of where jobs are going to be created in the new economy. The authors suggest that robots should be looked at as augmenting humans, not replacing them. “Look for ways to help humans perform their most human and most valuable work better,” say Davenport and Kirby. The book suggests that in order for human society to survive long-term, a new social contract has to be drawn up between employer and employee. The authors optimistically predict that corporations’ efforts to keep human workers employable will become part of their “social license to operate.”


In every industrial movement since the invention of the power loom and cotton gin there have been great transfers of wealth and jobs. History is riddled with the fear of the unknown that is countered by the unwavering human spirit of invention. As societies evolve, pressured by the adoption of technology, it will be the progressive thinkers embracing change who will lead movements of people to new opportunities.

Burstein’s remarks were very timely, as this past week McKinsey & Company released a new report entitled “Jobs lost, jobs gained: Workforce transitions in a time of automation.” McKinsey’s analysis took a global view of 46 countries that comprise 90% of the world’s Gross Domestic Product, and then focused in particular on six industrial countries with varying income levels: China, Germany, India, Japan, Mexico, and the United States. The introduction explains, “For each, we modeled the potential net employment changes for more than 800 occupations, based on different scenarios for the pace of automation adoption and for future labor demand,” ultimately concluding where the new jobs will be in 2030.

A prominent factor in the McKinsey report is that over the next thirteen years global consumption is anticipated to grow by $23 trillion. Based upon this trajectory, the authors estimate that between 300 million and 450 million new jobs would be generated worldwide, especially in the Far East. In addition, dramatic shifts in demographics and the environment will lead to the expansion of jobs in healthcare, IT consulting, construction, infrastructure, clean energy, and domestic services. The report estimates that automation will displace between 400 and 800 million people by 2030, forcing 75 to 375 million working men and women to switch professions and learn new skills. In order to survive, populations will have to embrace the concept of lifelong learning. Based upon the math, as many as half of those displaced will be unable to remain in their current professions.


According to McKinsey, a bright spot for employment could be the 80 to 200 million new jobs created by modernizing aging municipalities into “Smart Cities.” McKinsey’s conclusions were echoed by Rhonda Binder and Mitchell Kominsky of Venture Smarter in their RobotLab presentation last week. Binder presented her experience turning around the Jamaica Business Improvement District of Queens, New York by implementing her “Three T’s Strategy – Tourism, Transportation and Technology.” Binder stated that cities offer the perfect laboratory for autonomous systems and sensor-based technologies to improve the quality of life of residents. To support these endeavors, a hiring surge of urban technologists, architects, civil engineers, and construction workers across all trades could ensue over the next decade.

This is further validated by Alphabet subsidiary Sidewalk Labs’ recent partnership with Toronto, Canada to redevelop, and digitize, 800 acres of the city’s waterfront property. Dan Doctoroff, Chief Executive Officer of Sidewalk Labs, explained that the goal of the partnership is to prove what is possible by building a digital city within an existing city to test out autonomous transportation, new communication access, healthcare delivery solutions and a variety of urban planning technologies. Doctoroff’s sentiment was endorsed by Canadian Prime Minister Justin Trudeau, who said that “Sidewalk Toronto will transform Quayside [the waterfront] into a thriving hub for innovation and a community for tens of thousands of people to live, work, and play.” Access to technology not only offers the ability to improve the quality of living within the city, but also fosters an influx of sustainable jobs for decades.

In addition to updating crumbling infrastructure, an aging population will drive an increase in global healthcare services, particularly demand for in-home caregivers and aid workers. According to McKinsey, there will be at least 300 million more people over 65 years old globally by 2030, leading to 50 to 80 million new jobs. Geriatric medicine is leading new research in artificial intelligence and robotics for aging-in-place populations, demanding more doctors, nurses, medical technicians, and personal aides.


Aging will not be the only challenge facing the planet. Global warming could lead to an explosion of jobs aimed at turning back the clock on climate change. The rush to develop advances in renewable energy worldwide is already generating billions of dollars of new investment and demand for both high-skill and manual labor. As an example, the American solar industry employed a record-high 260,077 workers in late 2016, growth of at least 20% over the previous four years. New York alone experienced a 7% uptick in 2016, to close to 150,000 clean-energy jobs. McKinsey stated that tens of millions of new jobs could be created by 2030 in developing, manufacturing and installing energy-efficient innovations.

McKinsey also estimates that automation itself will bring new employment, with corporate technology spending hitting record highs. While the number of jobs added to support the deployment of machines is smaller than in the other industries above, these occupations offer higher wages. Robots could potentially create 20 to 50 million new “grey-collar” professions globally. In addition, retraining workers for these professions could lead to a new workforce of 100 million educators.


The report does not shy away from the fact that a major disruption is on the horizon for labor. In fact, the authors hope that by researching pockets of positive growth, humans will not be helpless victims. As Devin Fidler of the Institute for the Future suggests, “As basic automation and machine learning move toward becoming commodities, uniquely human skills will become more valuable. There will be an increasing economic incentive to develop mass training that better unlocks this value.”

A hundred years ago the world experienced a dramatic shift from an agrarian lifestyle to manufacturing. Since then, there have been revolutions in mass transportation and communications. No one could have predicted the massive transfer of jobs from the fields to the urban factories. Likewise, it is difficult to know what the next hundred years have in store for human, and robot, kind.

The advantage of four legs

Shortly after SoftBank acquired his company last October, Marc Raibert of Boston Dynamics confessed, “I happen to believe that robotics will be bigger than the Internet.” Many sociologists regard the Internet as the single biggest societal invention since the dawn of the printing press in 1440. To fully understand Raibert’s point of view, one needs to analyze his zoo of robots, which are best known for their awe-inspiring gait, balance and agility. The newest creation to walk out of Boston Dynamics’ lab is SpotMini, the latest evolution of its mechanical canines.

Big Dog, Spot’s unnerving ancestor, first came to public view in 2009 and has racked up quite a YouTube following, with more than six and a half million views. The technology of Big Dog led to the development of a menagerie of robots, including more dogs, cats, mules, fleas and creatures that have no organic counterparts. Most of the mechanical barn is made up of four-legged beasts, with the exception of its humanoid robot (Atlas) and its wheeled biped (Handle). Raibert’s vision of legged robotics spans several decades, going back to his work at MIT’s Leg Lab. In 1992, Raibert spun his lab out of MIT and founded Boston Dynamics. In his words, “Our long-term goal is to make robots that have mobility, dexterity, perception and intelligence comparable to humans and animals, or perhaps exceeding them; this robot [Atlas] is a step along the way.” The creepiness of Raibert’s Big Dog has given way to SpotMini’s more polished look, which incorporates 3D vision sensors on its head. The twenty-four-second teaser video has already garnered nearly 6 million views in the few days since its release and promises viewers hungry for more pet tricks to “stay tuned.”

There are clear stability advantages to quadrupeds over other approaches (bipeds, wheels and treads/track plates) across multiple types of terrain and elevation. At TED last year, Raibert demonstrated how his robo-pups, instead of drones and rovers, could be used for package delivery by easily ascending and descending stairs and other vertical obstacles. By navigating the physical world with an array of perceptive sensors, Boston Dynamics is really creating “data-driven hardware design.” According to Raibert, “one of the cool things of a legged robot is its omnidirectional” movement: “it can go sideways, it can turn in place.” This is useful for a variety of work scenarios, from logistics to warehousing to the most dangerous environments, such as the Fukushima nuclear site.
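To make the stability point concrete, here is a minimal, illustrative sketch (not Boston Dynamics code) of the classic static-stability check for a quadruped: with one foot in the air, the remaining three feet form a support triangle, and the robot stays balanced whenever its center of mass projects inside that triangle. The foot positions and center-of-mass numbers below are invented for illustration.

```python
# Static-stability check for a quadruped with one foot swinging:
# the center of mass must project inside the triangle of stance feet.

def inside_triangle(p, a, b, c):
    """True if point p lies inside triangle abc (2D cross-product test)."""
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    s1, s2, s3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    return (s1 >= 0 and s2 >= 0 and s3 >= 0) or (s1 <= 0 and s2 <= 0 and s3 <= 0)

stance_feet = [(0.3, 0.2), (0.3, -0.2), (-0.3, 0.2)]   # three feet still on the ground (m)
com_projection = (0.05, 0.05)                          # center of mass seen from above
print(inside_triangle(com_projection, *stance_feet))   # True -> statically stable
```

A biped in mid-stride has no such triangle, which is one reason it must rely on dynamic balance at every step.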

Boston Dynamics is not the only quadruped provider; recent upstarts have entered the market using Raibert’s research as inspiration for their own bionic creatures. Chinese roboticist Xing Wang is unabashed in his admiration for the founder of Boston Dynamics: “Marc Raibert … is my idol,” he said in a recent interview with IEEE Spectrum. However, his veneration for Raibert has not stopped him from founding a competing startup. Unitree Robotics aims to create quadruped robots that are as affordable as smartphones and drones. While Boston Dynamics has not sold its robots commercially, many have speculated that its current designs would cost hundreds of thousands of dollars. In the spirit of true flattery, Unitree’s first robot is, of course, a quadruped dog, named Laikago. Wang aims to sell Laikago for under $30,000 to science museums and, eventually, as a companion robot. When comparing his product to Raibert’s, Wang said he wanted to “make quadruped robots simpler and smaller, so that they can help ordinary people with things like carrying objects or as companions.” Wang boasts of Laikago’s three degrees of freedom (forward, backward, and sideways), its ability to scale rough terrain, and its capacity to pass anyone’s kick test.

In addition to omnidirectional movement, locomotion is a big factor for quadrupedal machines. Professor Marco Hutter of ETH Zürich, Switzerland is the inventor of ANYmal, an autonomous robot built for the most rugged and challenging environments. Using its proprietary “dynamic running” locomotion, Hutter has deployed the machine successfully in multiple industrial settings, including the rigorous ARGOS Challenge (Autonomous Robot for Gas and Oil Sites). The objective of ARGOS is to develop “a new generation of autonomous robots” for the energy industry, specifically capable of performing ‘dirty & dangerous’ inspection tasks such as “detecting anomalies and intervening in emergency situations.” Unlike a static human frame or bipedal humanoid, ANYmal is able to perform dynamic maneuvers with its four legs, finding footholds blindly without the need for vision sensors. While wheeled systems literally get stuck in the mud, Hutter’s mechanical beast can work continuously: above ground, underneath the surface, falling, spinning and bouncing upright to perform a mission with precise accuracy. In addition, ANYmal is loaded with a package of sensors that coordinate movements, map environments as point clouds, detect gas leaks, and listen for fissures in pipelines. Hutter explains that oil and gas sites are built for humans, with stairs and varying elevations that are nearly impassable for wheeled robots. A quadruped, however, can use its actuators and integrated springs to move with ease through the site via dynamic balance and complex maneuver planning. These highly mobile legged systems can fully rotate their joints, crouch low to the earth and flip into place to create footholds. In many ways they are like large insects creating their own tracks. Hutter says that while biology is a source of inspiration, “we have to see what we can do better and different for robotics,” and only then can we “build a machine that is better than nature.”

The idea of improving on nature is not new; Greek mythology is littered with half-man/half-beast demigods. Taking a page from the Greeks, Jiren Parikh imagines a world where nature is fused with machines. Parikh is the Chief Executive of Ghost Robotics, maker of “Minitaur,” the newest four-legged creation. Minitaur is smaller than SpotMini, Laikago, or ANYmal, as it is specifically designed to be a low-cost, high-performance alternative that can easily scale over or under any surface, regardless of weather, friction, or footing. In Parikh’s view, the purpose of legged devices is “to move over unstructured terrains like stairs, ladders, fences, rock fields, ice, in and under water.” Minitaur can “feel the environment at a much more granular level and allow for a greater degree of force control for maneuverability.” Parikh explains that quads are inherently more energy efficient, using force actuation and springs to store energy as they alternate movements between limbs. Minitaur’s smaller frame leverages this to maneuver more easily around unstructured environments without damaging the assets on the ground. Comparing quad solutions to other mobile methods, Parikh offers an analogy: “while a tank in comparison is the perfect device for unstructured terrain it only works if one doesn’t care about destroying the environment.” Ghost Robotics is very aware of the high value its customers place on their sites, as Parikh plans to distribute its low-cost solution across a number of “industrial, infrastructure, mining and military verticals.” Essentially, Minitaur is “a mobile IoT platform” regardless of the situation on the ground, indoor or outdoor. Longer term, Parikh envisions a world where Ghost Robotics is at the forefront of retail and home use cases, from delivery bots to family pets. Parikh boasts, “You certainly won’t be woken up at 5 AM to go for a walk.”

The topic of autonomous robots will be discussed at the next RobotLabNYC event on November 29th @ 6pm with New York Times best-selling author Dan Burstein of Millennium Technology Value Partners and Rhonda Binda of Venture Smarter, formerly with the Obama Administration.

The race to own the autonomous super highway: Digging deeper into Broadcom’s offer to buy Qualcomm

Governor Andrew Cuomo of New York declared last month that New York City will join 13 other states in testing self-driving cars: “Autonomous vehicles have the potential to save time and save lives, and we are proud to be working with GM and Cruise on the future of this exciting new technology.” For General Motors, this represents a major milestone in the development of its Cruise software, since the knowledge gained on Manhattan’s busy streets will be invaluable in accelerating its deep learning technology. In the spirit of one-upmanship, Waymo went one step further by declaring this week that it will be the first company in the world to ferry passengers completely autonomously (without human engineers safeguarding the wheel).

As unmanned systems speed toward consumer adoption, one challenge that Cruise, Waymo and others may encounter within the busy canyons of urban centers is the loss of Global Positioning System (GPS) satellite data. Robots require a complex suite of coordinated data systems, bounced between orbiting satellites, to provide the positioning and communication links needed to accurately navigate our world. The only thing that is certain, as competing technologies and standards wrestle for adoption in this nascent marketplace, is the critical connection between Earth and space. Based upon the estimated growth of autonomous systems on the road, in the workplace and in the home over the next ten years, most unmanned systems will rely heavily on the ability of commercial space providers to fulfill their ambitious plans to launch thousands of new satellites into an already crowded low Earth orbit.

The entry of autonomous systems will drive an explosion of data communications between terrestrial machines and space, leading to tens of thousands of new satellite launches over the next two decades. A study by Northern Sky Research (NSR) projects that by 2023 there will be an estimated 5.8 million satellite machine-to-machine (M2M) and Internet of Things (IoT) connections out of approximately 50 billion Internet-connected devices globally. To meet this demand, satellite providers are racing to the launch pads and raising billions in capital, even before firing up the rockets. As an example, OneWeb, which has raised more than $1.5 billion from SoftBank, Qualcomm and Airbus, plans to launch the first 10 satellites of its constellation in 2018, eventually growing to 650 over the next decade. OneWeb competes with SpaceX, Boeing, Inmarsat, Iridium, and others in deploying new satellites offering high-speed communication spectrum in the Ku band (12 GHz – 18 GHz), K band (18 GHz – 27 GHz), Ka band (27 GHz – 40 GHz) and V band (40 GHz – 75 GHz). The opening of these higher-frequency bands is critical to support exploding data demands. Today there are more than 250 million cars on the road in the United States; in the future these cars will connect to the Internet, transmitting 200 million lines of code or 50 billion pings of data to safely and reliably transport passengers to their destinations every day.

Satellites already provide millions of GPS coordinates for connected systems. However, GPS accuracy has been off by as much as 5 meters, which in a fully autonomous world could mean the difference between life and death. Chip manufacturer Broadcom aims to reduce that error margin to 30 centimeters. According to a press release this summer, Broadcom’s technology works better in concrete canyons like New York, which have plagued Uber drivers for years with wrong fare destinations. Using new L5 satellite signals alongside the legacy L1 band, the chips are able to resolve positions more quickly and with lower power consumption. Manuel del Castillo of Broadcom explained, “Up to now there haven’t been enough L5 satellites in orbit.” Currently there are approximately 30 L5-capable satellites in orbit, but del Castillo suggests that could be enough to begin shipping the new chip next year: “[Even in a city’s] narrow window of sky you can see six or seven, which is pretty good. So now is the right moment to launch.”
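One reason dual-frequency receivers can tighten the error budget is that ionospheric delay scales with the inverse square of the carrier frequency, so combining L1 and L5 pseudoranges cancels it to first order. The snippet below is a textbook illustration of that combination, not Broadcom’s implementation; the range and delay values are made up.

```python
# Ionosphere-free combination of dual-frequency GNSS pseudoranges.
# Ionospheric delay grows roughly as 1/f^2, so a weighted difference of the
# L1 and L5 measurements removes it to first order.

F_L1 = 1575.42e6  # L1 carrier frequency, Hz
F_L5 = 1176.45e6  # L5 carrier frequency, Hz

def ionosphere_free_range(p_l1: float, p_l5: float) -> float:
    """Combine L1/L5 pseudoranges (meters) to cancel first-order iono delay."""
    g1, g5 = F_L1 ** 2, F_L5 ** 2
    return (g1 * p_l1 - g5 * p_l5) / (g1 - g5)

# Toy example: a 20,000 km satellite range with 4 m of ionospheric delay on L1.
true_range = 2.0e7
iono_l1 = 4.0
iono_l5 = iono_l1 * (F_L1 / F_L5) ** 2      # the delay is larger on the lower frequency
print(ionosphere_free_range(true_range + iono_l1, true_range + iono_l5))
# prints ~20000000.0: the 4 m ionospheric error is removed
```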

David Bruemmer, a leading roboticist and business leader in this space, explained to me this week that GPS is inherently deficient, even with L5 satellite data. In addition, current autonomous systems rely too heavily on vision systems such as LIDAR and cameras, which can only see what is in front of them, not around the corner. In Bruemmer’s opinion, the only solution that provides the greatest amount of coverage is one that combines vision and GPS with point-to-point communications such as Ultra-Wideband and RF beacons. Bruemmer’s company, Adaptive Motion Group (AMG), is a leading innovator in this space. Ultimately, for AMG to work efficiently with unmanned systems, it requires a communication pipeline wide enough to bring space signals into a network of terrestrial high-speed frequencies.
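The intuition behind Bruemmer’s multi-sensor argument can be shown with a toy fusion calculation. The sketch below is a generic inverse-variance weighting of independent position estimates, not AMG’s actual stack; the accuracies assigned to GPS, vision and UWB are illustrative assumptions.

```python
import numpy as np

# Minimal fusion sketch: weight each source by 1/sigma^2 so that tighter
# sensors dominate, and note that the fused uncertainty is lower than any
# single input. The 1-sigma errors below are assumed, not measured.
estimates = {
    "gps":    (np.array([10.0, 20.0]), 5.0),   # meters-level GPS fix
    "vision": (np.array([ 8.5, 21.2]), 1.0),   # lane/landmark localization
    "uwb":    (np.array([ 8.8, 20.9]), 0.3),   # point-to-point ranging beacon
}

def fuse(estimates):
    weights = {k: 1.0 / sigma ** 2 for k, (_, sigma) in estimates.items()}
    total = sum(weights.values())
    fused = sum(w * estimates[k][0] for k, w in weights.items()) / total
    fused_sigma = (1.0 / total) ** 0.5        # smaller than every individual sigma
    return fused, fused_sigma

position, sigma = fuse(estimates)
print(position, sigma)   # fused fix lands near the UWB estimate, sigma < 0.3 m
```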

AMG is not the only company focused on utilizing a wide breadth of data points to accurately steer robotic systems. Sandy Lobenstein, Vice President of Toyota Connected Services, explains that the Japanese carmaker has been working with the satellite antenna company Kymeta to expand its data connectivity bandwidth in preparation for Toyota’s autonomous future. “We just announced a consortium with companies such as Intel and a few others to find ways to use edge computing and create standards around managing data flow in and out of vehicles with the cellphone industries or the hardware industries. Working with a company like Kymeta helps us find ways to use their technology to handle larger amounts of data and make use of large amounts of bandwidth that is available through satellite,” said Lobenstein.


In a world of fully autonomous vehicles, the road of the next decade truly will become an information superhighway – with data streams flowing down from thousands of satellites to receiving towers littered across the horizon, bouncing between radio masts, antennas and cars (Vehicle-to-Vehicle [V2V] and Vehicle-to-Infrastructure [V2X] communications). Last week, Broadcom ratcheted up its autonomous vehicle ambitions by announcing the largest tech deal ever: a $103 billion offer to acquire Qualcomm. The acquisition would enable Broadcom to dominate both sides of autonomous communications – satellite uplinks and GPS on one hand, and vehicle-to-vehicle links on the other. Broadcom CEO Hock Tan said, “This complementary transaction will position the combined company as a global communications leader with an impressive portfolio of technologies and products.” Days earlier, Tan attended a White House press conference with President Trump, boasting of plans to move Broadcom’s corporate office back to the United States – a timely move, as federal regulators would have to approve a Broadcom/Qualcomm merger.

The merger news comes months after Intel acquired the Israeli computer vision company Mobileye for $15 billion. In addition to Intel, Broadcom also competes with Nvidia, which is leading the charge to enable artificial intelligence on the road. Last month, Nvidia CEO Jensen Huang predicted that “It will take no more than 4 years to have fully autonomous cars on the road. How long it takes for the vast majority of cars on the road to become that, it really just depends.” Nvidia, traditionally a computer graphics chip company, has invested heavily in developing AI chips for automated systems. Huang shares his vision: “There are many tasks in companies that can be automated… the productivity of society will go up.”

Industry consolidation represents the current state of the autonomous car race, as chip makers vie to own the next generation of wireless communications. Tomorrow’s 5G mobile networks promise a tenfold increase in data throughput for phones, cars, drones, industrial robots and smart city infrastructure. Researchers estimate that the number of Internet-connected chips could grow from 12 million to 90 million by the end of this year, making connectivity as ubiquitous as gasoline for connected cars. Karl Ackerman, analyst at Cowen & Co., said it best: “[Broadcom] would basically own the majority of the high-end components in the smart phone market and they would have a very significant influence on 5G standards, which are paramount as you think about autonomous vehicles and connected factories.”

The topic of autonomous transportation and smart cities will be featured at the next RobotLabNYC event on November 29th @ 6pm with New York Times best-selling author Dan Burstein of Millennium Technology Value Partners and Rhonda Binda of Venture Smarter, formerly with the Obama Administration – RSVP today.

Brain surgery: The robot efficacy test?

An analysis by Stanford researchers shows that the use of robot-assisted surgery to remove kidneys wasn’t always more cost-effective than using traditional laparoscopic methods.
Master Video/Shutterstock

The internet hummed last week with reports that “Humans Still Make Better Surgeons Than Robots.” Stanford University Medical Center set off the tweetstorm with its seemingly scathing report on robotic surgery. When reading the research, which covered 24,000 patients with kidney cancer, I concluded that the problem lay with humans overcharging patients rather than with any technology flaw. In fact, the study praised robotic surgery for complicated procedures and suggested the fault lay with hospitals unnecessarily pushing robotic surgery for simple operations over conventional methods, which led to “increases in operating times and cost.”

Dr. Benjamin Chung, the author of the report, stated that the expenses were due to either “the time needed for robotic operating room setup” or the surgeon’s “learning curve” with the new technology. Chung defended the use of robotic surgery by claiming that “surgical robots are helpful because they offer more dexterity than traditional laparoscopic instrumentation and use a three-dimensional, high-resolution camera to visualize and magnify the operating field. Some procedures, such as the removal of the prostate or the removal of just a portion of the kidney, require a high degree of delicate maneuvering and extensive internal suturing that render the robot’s assistance invaluable.”

Chung’s concern was due to the dramatic increase in hospitals selling robotic-assisted surgeries to patients rather than more traditional methods for kidney removals. “Although the laparoscopic procedure has been standard care for a radical nephrectomy for many years, we saw an increase in the use of robotic-assisted approaches, and by 2015 these had surpassed the number of conventional laparoscopic procedures,” explains Chung. “We found that, although there was no statistical difference in outcome or length of hospital stay, the robotic-assisted surgeries cost more and had a higher probability of prolonged operative time.”

The dexterity and precision of robotic instruments has been proven in live operating theaters for years, as well as in a multitude of concept videos on the internet of fruit being autonomously stitched up. Dr. Joan Savall, also of Stanford, developed a robotic system that is even capable of performing (unmanned) brain surgery on a live fly. For years, medical students have been ripping the heads off drosophila with tweezers in the hopes of learning more about the insect’s anatomy. Instead, Savall’s machine gently follows the fly using computer vision to precisely target its thorax – literally a moving bullseye the size of a period. The robot is so careful that the insect is unfazed and flies off after the procedure. Clearly, the robot is quicker and more exacting than even the most careful surgeon. According to the journal Nature Methods, the system can operate on 100 flies an hour.

Last week, Dr. Dennis Fowler of Columbia University, also CEO of Platform Imaging, said that he imagines a future in which the surgeon will program the robot to finish the procedure and stitch up the patient. Senior surgeons already pass such mundane tasks to their medical students, he said, “so why not a robot?” Platform Imaging is an innovative startup that aims to reduce the amount of personnel and equipment a hospital needs when performing laparoscopic surgeries. Long term, it plans to add snake robots to its flexible camera to give surgeons the greatest possible maneuverability. In addition to the obvious health benefits to the patient, robotic surgeries like Dr. Fowler’s will reduce the number of workplace injuries to laparoscopic surgeons. According to a University of Maryland study, 87% of surgeons who perform laparoscopic procedures complain of eye strain; hand, neck, back and leg pain; headaches; finger calluses; disc problems; shoulder muscle spasms and carpal tunnel syndrome. Many times these injuries are so debilitating that they lead to early retirement. The author of the report, Dr. Adrian Park, explains: “In laparoscopic surgery, we are very limited in our degrees of movement, but in open surgery we have a big incision, we put our hands in, we’re directly connected with the target anatomy. With laparoscopic surgery, we operate by looking at a video screen, often keeping our neck and posture in an awkward position for hours. Also, we’re standing for extended periods of time with our shoulders up and our arms out, holding and maneuvering long instruments through tiny, fixed ports.” In Dr. Fowler’s view, robotic surgery is a game changer that can extend the longevity of a physician’s career.

At Children’s National Health System in Washington, D.C., the Smart Tissue Autonomous Robot (STAR) provided a sneak peek at the future of surgery. Using advanced 3D imaging systems and precise force-sensing instruments, STAR was able to autonomously stitch up soft tissue samples from a living pig with sub-millimeter accuracy far greater than even the most precise human surgeons. According to the study published in the journal Science Translational Medicine, there are 45 million soft tissue surgeries performed each year in the United States.

Dr. Peter Kim, STAR’s creator, says, “Imagine that you need a surgery, or your loved one needs a surgery. Wouldn’t it be critical to have the best surgeon and the best surgical techniques available?” Dr. Kim adds, “Even though we take pride in our craft of doing surgical procedures, to have a machine or tool that works with us in ensuring better outcome safety and reducing complications—[there] would be a tremendous benefit.”

“Now driverless cars are coming into our lives,” explains Dr. Kim. “It started with self-parking, then a technology that tells you not to go into the wrong lane. Soon you have a car that can drive by itself.” Similarly, Dr. Kim and Dr. Fowler envision a time in the near future when surgical robots go from assisting humans to being overseen by humans. Eventually, Dr. Kim says, they may take over entirely. After all, his team has “programmed the best surgeon’s techniques, based on consensus and physics, into the machine.”

The idea of full autonomy in the operating room and on the road raises a litany of ethical concerns, such as the acceptable failure rate of machines. The value proposition for self-driving cars is very clear: road safety. In 2015, there were approximately 35,000 road fatalities in the United States; self-driving cars promise to reduce that figure dramatically. What is unclear is what the new acceptable rate of fatalities with machines will be. Professor Amnon Shashua of Hebrew University, founder of Mobileye, has struggled with this dilemma for years. “If you drop 35,000 fatalities down to 10,000 – even though from a rational point of view it sounds like a good thing, society will not live with that many people killed by a computer,” explains Dr. Shashua. While everyone would agree that zero failures is the most desirable outcome, in reality, Shashua says, “this will never happen.” He elaborates, “What you need to show is that the probability of an accident drops by two to three orders of magnitude. If you drop [35,000 fatalities] down to 200, and those 200 are because of computer errors, then society will accept these robotic cars.”

Dr. Iyad Rahwan of MIT is much more to the point: “If we cannot engender trust in the new system, we risk the entire autonomous vehicle enterprise.” According to his research, “Most people want to live in a world where cars will minimize casualties. But everybody wants their own car to protect them at all costs.” Dr. Rahwan is referring to the old trolley problem – does the machine save its driver or the pedestrian when confronted with a choice? Dr. Rahwan declares, “This is a big social dilemma. Who will buy a car that is programmed to kill them in some instances? Who will insure such a car?” Last May at the Paris Motor Show, Christoph von Hugo of Daimler-Benz emphatically answered: “If you know you can save at least one person, at least save that one. Save the one in the car.”

The ethics of unmanned systems and more will be discussed at the next RobotLab forum on “The Future of Autonomous Cars” with Steve Girsky formerly of General Motors – November 29th @ 6pm, WeWork Grand Central NYC, RSVP

The five senses of robotics

Healthy humans take their five senses for granted. Molding metal into perceiving machines, by contrast, requires a significant number of engineers and a great deal of capital. Already, we have handed over many of our faculties to embedded devices in our cars, homes, workplaces, hospitals, and governments. Even automation skeptics unwillingly trust the smart gadgets in their pockets with their lives.

Last week, General Motors stepped up its autonomous car effort by augmenting its artificial intelligence unit, Cruise Automation, with greater perception capabilities through the acquisition of the LIDAR (Light Detection and Ranging) technology company Strobe. Cruise was purchased with great fanfare last year by GM for a billion dollars. Strobe’s unique value proposition is shrinking its optical arrays to the size of a microchip, substantially reducing the cost of a traditionally expensive sensor that autonomous vehicles rely on to measure the distances of objects on the road. Cruise CEO Kyle Vogt wrote last week on Medium that “Strobe’s new chip-scale LIDAR technology will significantly enhance the capabilities of our self-driving cars. But perhaps more importantly, by collapsing the entire sensor down to a single chip, we’ll reduce the cost of each LIDAR on our self-driving cars by 99%.”
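For readers unfamiliar with what the sensor actually measures, the back-of-the-envelope sketch below shows the basic time-of-flight arithmetic behind a LIDAR range reading. It is an illustration of the principle, not Strobe’s design, and the 200-nanosecond figure is just an example.

```python
# LIDAR range from pulse time of flight: d = c * t / 2
# (the pulse travels to the target and back, hence the division by two).
C = 299_792_458.0                      # speed of light, m/s

def range_from_time_of_flight(t_seconds: float) -> float:
    return C * t_seconds / 2.0

# A return arriving 200 nanoseconds after the pulse left corresponds to ~30 m.
print(round(range_from_time_of_flight(200e-9), 2))   # 29.98
```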

GM is not the first Detroit automaker aiming to reduce the cost of sensors on the road; last year Ford invested $150 million in Velodyne, the leading LIDAR company on the market. Velodyne is best known for its rotating sensor, often mistaken for a siren on top of the car. In describing the transaction, Raj Nair, Ford’s Executive Vice President of Product Development and Chief Technical Officer, said, “From the very beginning of our autonomous vehicle program, we saw LIDAR as a key enabler due to its sensing capabilities and how it complements radar and cameras. Ford has a long-standing relationship with Velodyne and our investment is a clear sign of our commitment to making autonomous vehicles available for consumers around the world.” As the race heats up among competing perception technologies, LIDAR is already a crowded field, with eight other startups competing to become the standard vision system for autonomous driving.

Walking the halls of Columbia University’s engineering school last week, I visited a number of the robotics labs working on the next generation of sensing technology. Dr. Peter Allen, Professor of Computer Science, is the founder of the Columbia Grasp Database, whimsically called GraspIt!, which enables robots to better recognize and pick up everyday objects. GraspIt! provides “an architecture to enable robotic grasp planning via shape completion.” The open-source GraspIt! database has over 440,000 3D representations of household articles from varying viewpoints, which are used to train its 3D convolutional neural network (CNN). According to the lab’s IEEE paper published earlier this year, the CNN takes “a 2.5D pointcloud” captured from “a single point of view” of each item and then “fills in the occluded regions of the scene, allowing grasps to be planned and executed on the completed object.” As Dr. Allen demonstrated last week, the CNN performs as well in live scenarios, with a robot “seeing” an object for the first time, as it does in computer simulations.
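To make the pipeline concrete, here is a rough PyTorch sketch of the kind of voxel-based encoder-decoder a shape-completion CNN can use: a partial occupancy grid from a single view goes in, and per-voxel occupancy probabilities for the completed object come out. The layer sizes and the 32-cube grid are assumptions for illustration, not the Columbia lab’s actual architecture.

```python
import torch
import torch.nn as nn

# Sketch of a voxel-based shape-completion network (illustrative sizes only).
# Input: a partial occupancy grid voxelized from a single-view 2.5D point cloud.
# Output: per-voxel occupancy probability for the completed object.
class ShapeCompletionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=4, stride=2, padding=1),   # 32 -> 16
            nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=4, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, kernel_size=4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose3d(16, 1, kernel_size=4, stride=2, padding=1),   # 16 -> 32
            nn.Sigmoid(),  # occupancy probability per voxel
        )

    def forward(self, partial_voxels):
        return self.decoder(self.encoder(partial_voxels))

# Toy usage: one sparse partial view on a 32^3 grid in, one completed grid out.
net = ShapeCompletionCNN()
partial = (torch.rand(1, 1, 32, 32, 32) > 0.95).float()
completed = net(partial)
print(completed.shape)   # torch.Size([1, 1, 32, 32, 32])
```

In practice such a network is trained on pairs of partial scans and full 3D models, which is where a 440,000-object database earns its keep.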


Taking a novel approach to utilizing their cloud-based data platform, Allen’s team now aims to help quadriplegics better navigate their world with assistive robots. Typically, a quadriplegic is reliant on human aides to perform even the most basic functions like eating and drinking; Brain-Computer Interfaces (BCIs), however, offer the promise of independence with a robot. Wearing a BCI helmet, Dr. Allen’s grad student was able to move a robot around the room just by looking at objects on a screen. The object on the screen triggers electroencephalogram (EEG) signals that are transmitted to the robot, which translates them into pointcloud images from the database. According to their research, “Noninvasive BCI’s, which are very desirable from a medical and therapeutic perspective, are only able to deliver noisy, low-bandwidth signals, making their use in complex tasks difficult. To this end, we present a shared control online grasp planning framework using an advanced EEG-based interface…This online planning framework allows the user to direct the planner towards grasps that reflect their intent for using the grasped object by successively selecting grasps that approach the desired approach direction of the hand. The planner divides the grasping task into phases, and generates images that reflect the choices that the planner can make at each phase. The EEG interface is used to recognize the user’s preference among a set of options presented by the planner.”
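The shared-control loop described in that quote can be caricatured in a few lines of code: the planner proposes candidate grasps, a (here simulated) EEG decoder picks the option closest to the user’s intent, and the planner refines around that selection phase by phase. This is a toy stand-in, not the lab’s framework, and every number in it is invented.

```python
import random

# Toy shared-control loop: planner proposes, a stubbed "EEG decoder" selects,
# and the search narrows toward the user's intended grasp approach direction.
random.seed(1)

def planner_candidates(center, spread, n=4):
    """Propose n candidate approach angles (degrees) around the current guess."""
    return [center + random.uniform(-spread, spread) for _ in range(n)]

def decode_eeg_choice(options, user_intent):
    """Stand-in for the EEG interface: return the option the user 'prefers'."""
    return min(options, key=lambda angle: abs(angle - user_intent))

user_intent = 42.0            # the approach direction the user actually wants
center, spread = 0.0, 90.0
for phase in range(5):        # each phase narrows in on the intended grasp
    options = planner_candidates(center, spread)
    center = decode_eeg_choice(options, user_intent)
    spread *= 0.5
    print(f"phase {phase}: chose {center:.1f} degrees")
```

The point of the shared-control design is that the noisy BCI only ever has to pick among a handful of options, while the planner does the precise geometry.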

While technologies like LIDAR and GraspIt! enable robots to better perceive the human world, in the basement of Columbia’s SEAS engineering building Dr. Matei Ciocarlie is developing an array of affordable tactile sensors that let machines touch and feel their environments. Humans have very complex multi-modal sensing systems, built through trial-and-error knowledge gained since birth. Dr. Ciocarlie ultimately aims to build a robotic gripper with the capability of a human hand. Using light signals, Dr. Ciocarlie has demonstrated “sub-millimeter contact localization accuracy” on grasped objects, along with estimates of the force applied in picking them up. At Columbia’s Robotic Manipulation and Mobility Lab (ROAM), Ciocarlie is tackling “one of the key challenges in robotic manipulation” by figuring out how “you reduce the complexity of the problem without losing versatility.” While he demonstrated a variety of new grippers and force sensors being deployed in environments as hostile as the International Space Station and as cluttered as a human home, the most immediately promising innovation is Ciocarlie’s therapeutic robotic hand (described below).
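As a loose illustration of the “light in, contact location out” idea (and emphatically not Dr. Ciocarlie’s method), one generic approach is to calibrate a regression from raw photodiode intensities to known contact positions and then apply it at run time. Everything below, from the number of photodiodes to the noise level, is an assumption.

```python
import numpy as np

# Calibrate a linear map from optical-sensor readings to contact position,
# then localize a new contact. All data here is synthetic.
rng = np.random.default_rng(0)

# Fake calibration data: 200 presses at known (x, y) spots on a fingertip,
# each producing readings from 8 photodiodes (plus a little noise).
true_map = rng.normal(size=(8, 2))              # hidden intensity->position relation
positions = rng.uniform(-5, 5, size=(200, 2))   # contact points, millimeters
readings = positions @ np.linalg.pinv(true_map) + 0.01 * rng.normal(size=(200, 8))

# Fit: position ≈ readings @ W
W, *_ = np.linalg.lstsq(readings, positions, rcond=None)

# Localize a new, unseen contact from its light signature.
new_reading = np.array([[-5.0, 5.0]]) @ np.linalg.pinv(true_map)
print(np.array([[-5.0, 5.0]]), "->", new_reading @ W)  # recovered to well under a mm
```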

According to Ciocarlie’s paper: “Fully wearable hand rehabilitation and assistive devices could extend training and improve quality of life for patients affected by hand impairments. However, such devices must deliver meaningful manipulation capabilities in a small and lightweight package. […] In experiments with stroke survivors, we measured the force levels needed to overcome various levels of spasticity and open the hand for grasping using the first of these configurations, and qualitatively demonstrated the ability to execute fingertip grasps using the second. Our results support the feasibility of developing future wearable devices able to assist a range of manipulation tasks.”

Across the ocean, Dr. Hossam Haick of the Technion-Israel Institute of Technology has built an intelligent olfactory system that can help diagnose cancer. Dr. Haick explains, “My college roommate had leukemia, and it made me want to see whether a sensor could be used for treatment. But then I realized early diagnosis could be as important as treatment itself.” Using an array of sensors composed of “gold nanoparticles or carbon nanotubes,” patients breathe into a tube that detects disease biomarkers through smell. “We send all the signals to a computer, and it will translate the odor into a signature that connects it to the disease we exposed to it,” says Dr. Haick. Last December, Haick’s AI reported 86% accuracy in detecting cancers in more than 1,400 subjects across 17 countries, and the accuracy increased as its neural network trained on specific disease cases. Haick’s machine could one day have a better sense of smell than canines, which have been proven able to sniff out cancer.
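Conceptually, each breath sample becomes a vector of sensor responses and a classifier learns which “signatures” correspond to which diagnosis. The sketch below is a generic stand-in with synthetic data, not the Technion pipeline; the sensor count, the shift applied to “disease” samples, and the choice of a random forest are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Each row = one breath sample's responses across an array of nanomaterial
# sensors; the classifier learns to separate disease signatures from controls.
rng = np.random.default_rng(42)

n_samples, n_sensors = 400, 20
X = rng.normal(size=(n_samples, n_sensors))     # synthetic sensor responses
y = rng.integers(0, 2, size=n_samples)          # 0 = control, 1 = disease
X[y == 1, :5] += 0.8                            # pretend biomarkers shift 5 sensors

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # accuracy on held-out folds
```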

When writing this post on robotic senses, I had several conversations with Alexa, and I am always impressed with her auditory processing skills. It seems that the only area in which humans will continue to exceed robots is taste; however, I am reminded of Dr. Hod Lipson’s food printer. As I watched Lipson’s concept video of the machine squirting, layering, pasting and even cooking something that resembled Willy Wonka’s “Everlasting Gobstopper,” I sat back in his Creative Machines Lab realizing that sci-fi is no longer fiction.

Want to know more about LIDAR technology and self-driving systems? Join RobotLab’s next forum on “The Future of Autonomous Cars” with Steve Girsky formerly of General Motors – November 29th @ 6pm, WeWork Grand Central NYC, RSVP

RoboBusiness 2017: What’s cooking in robotics?

Mike Toscano, the former president of the Association for Unmanned Vehicle Systems International, emphatically declared at the September RobotLab forum that “anyone who claims to know the future of the [robotics] industry is lying, I mean no one could’ve predicted the computing mobile revolution.” These words acted as a guiding principle when walking around RoboBusiness in Silicon Valley last week.

The many keynotes, pitches and exhibits in the Santa Clara Convention Center had the buzz of an industry racing toward mass adoption, similar to the early days of personal computing. The inflection point for the invention that changed the world, the PC, came in 1995. That year, Sun Microsystems released Java to developers with the promise of “write once, run anywhere,” followed weeks later by Microsoft’s consumer software package, Windows 95. Greater accessibility led to full ubiquity and applications unthinkable by the original engineers. In many ways, the robot market is standing a few years before its own watershed moment.

In my last post, I highlighted mechanical musicians and painters; this week it is time to see what is cooking, literally, in robotics. Next year, startup Moley plans to introduce the “first fully-automated and integrated intelligent cooking robot,” priced under $100,000. It already has a slick video reminiscent of Lily Robotics’ rise to the headlines; needless to say, Moley has created quite a stir in the culinary community.

Austin Gresham, executive chef at The Kitchen by Wolfgang Puck, is very skeptical: “Professional chefs have to improvise constantly as they prepare dishes. If a recipe says to bake a potato for 25 minutes and the potatoes are more or less dense than the previous batch, then cooking times will vary. I would challenge any machine to make as good a mashed potato (from scratch).” Gresham’s challenge is really the crux of the matter: creativity is driven by humans’ desire for food. Without taste, could a robot chef have the intuition to improvise?

Acting as a judge of the RoboBusiness Pitch Fire Competition, I met entrepreneurs undiscouraged by the market challenges ahead. Throughout my Valley visit, I encountered five startups building commercial and consumer culinary applications; any time that happens within such a short timespan, I stop and take notice. Automated restaurants seem to be a growing trend across the nation, with a handful of upstarts on both coasts. Eatsa is a chain of quinoa-salad restaurants sans cashiers and servers. Customers order via mobile devices or on-site kiosks, picking up their ready dishes from an automated floor-to-ceiling lockbox fixture. Behind the wall, however, Eatsa has hourly workers manually preparing the salad bowls. Cafe X in San Francisco offers a completely automated experience, with a robot-arm barista preparing, brewing and serving espressos, cappuccinos, and Americanos. After raising $5 million from venture investors, Cafe X plans to expand with robot kiosks throughout the city. Probably the most end-to-end automated restaurant concept I visited is tucked away on UC Berkeley’s Global Campus: BBox by Nourish. BBox is currently running a trial on campus and plans to open its first store next year to conquer the multi-billion-dollar breakfast market with egg sandwiches and gourmet coffee.

According to Nourish’s CEO Greg Becker, BBox will “reengineer the food ecosystem, from farm to mouth.” Henry Hu, Cafe X’s founder, also aims to revolutionize “the supply chain, recipes, maintenance, and customer support.” To date, the most successful robotic concept is Zume Pizza. Founder Julia Collins made headlines last year with her groundbreaking spin on the traditional pizzeria. Today she is taking on Domino’s dollar for dollar in the San Francisco area, delivering pies in under 22 minutes. Collins, a former Chief Financial Officer of a Mexican restaurant chain, challenges the food industry: “Why don’t we just re-write the rules— forget about everything we learned about running a restaurant?” Already, Zume is serving hundreds of satisfied customers daily, proving that, at least with pizza, it is possible to innovate.

“We realized we could automate more of the unsafe repetitive tasks of operating a kitchen using flexible, dynamic robots,” explains Collins, who currently employs over 50 human workers doing everything from software engineering to supervising the robots to delivering the pizza. “The humans that work at Zume are making dough from scratch, working with farmers to source products, recipe development—more collaborative, creative human tasks. [We have] lower rent costs because we don’t have a storefront; delivery only and lower labor costs. We reinvest those savings into locally sourced, responsibly farmed food.” Collins also boasts that, thanks to those cost savings, her human workforce has access to free vision, dental, and health insurance.

Even Shake Shack could have competition soon, as Google Ventures-backed Momentum Machines is launching an epicurean robot bistro in San Francisco’s chic SoMa district next year. The machine, which has been clocked at 400 burgers an hour, promises “to slice toppings, grill a patty, assemble, and bag the burger without any help from humans,” at prices that “everyone can afford.” Momentum’s proposition prompted former McDonald’s CEO Ed Rensi to controversially state that “it’s cheaper to buy a $35,000 robotic arm than it is to hire an employee who’s inefficient making $15 an hour bagging french fries.” Comments like Rensi’s do not further the industry; in fact, they probably fed the controversy last month around the launch of Bodega, an automated convenience store that even enraged Lin-Manuel Miranda.

The bad press was multiplied by Elizabeth Segran’s article in Fast Company, which read, “the major downside to this concept — should it take off — is that it would put a lot of mom-and-pop stores out of business.” Founder Paul McDonald responded on Medium, “Rather than take away jobs, we hope Bodega will help create them. We see a future where anyone can own and operate a Bodega — delivering relevant items and a great retail experience to places no corner store would ever open.” While Bodega is not exactly a robotic concept, it is similar to the automated marketplace of Amazon Go, with 10 computer vision sensors tracking the consumer and inventory management handled via a mobile checkout app. “We’re shrinking the store and putting it in a box,” said McDonald. The founder has publicly declared war on 7-Eleven’s 5,000 stores, in addition to the 4 million vending machines across the US. Realizing the pressure to innovate, last year 7-Eleven made history with the first drone Slurpee delivery. “Drone delivery is the ultimate convenience for our customers and these efforts create enormous opportunities to redefine convenience,” said Jesus H. Delgado-Jenkins, 7-Eleven EVP and Chief Merchandising Officer. “This delivery marks the first time a retailer has worked with a drone delivery company to transport immediate consumables from store to home. In the future, we plan to make the entire assortment in our stores available for delivery to customers in minutes. Our customers have demanding schedules, are on-the-go 24/7 and turn to us to help navigate the challenges of their daily lives. We look forward to working with Flirtey to deliver to our customers exactly what they need, whenever and wherever they need it.”

As mom & pop stores compete for market share, one wonders, with more Kitchen OS concepts emerging, whether home-cooked meals will join the list of outdated cultural traditions. Serenti Kitchen in San Francisco plans to bring the Keurig pod revolution to food with a proprietary machine that takes prepared recipe pods, drops them into a bowl and whips them to perfection with a robotic arm. Serenti founder Tim Chen was featured last year at the Smart Kitchen Summit, which reconvenes later this month in Seattle. Chen said, “We’re building something that’s quite hard, mechanically, so it’s more from a vision where we wanted to initially develop a machine that could cook, and make cooking easier and automate cooking for the home.” Initially Chen plans to target business catering – “In the near term, we need to focus on placing these machines where there’s the highest amount of density, which is right in the offices” – but long term Serenti plans to join the appliance counter. Chen explained his inspiration: “Our Mom is a great cook, so they’ve watched her execute the meals. Then realized a lot of it is repetitive, and what recipes are, is essentially just a machine language.” Chen’s observations are shared by many in the IoT and culinary space, as this year’s Smart Kitchen Summit finalists include more robotic inventions, such as Crepe Robot, which automatically dispenses, cooks and flavors France’s favorite snack, and GammaChef, a robotic appliance that, like Serenti, promises to whip up anything in a bowl. Clearly, these inventions will eventually lead to a redesign of the physical home kitchen, a space already crowded with appliances. Some innovators are even using robotic arms tucked away in cabinets, along with specialized drawers, ovens and refrigeration units that communicate seamlessly to serve up dinner.

The automated kitchen envisioned by Moley and others might be coming sooner than anyone expects; then again, it could be a rotten egg. In almost every sci-fi movie and television show, the kitchen is reduced to a replicator that synthesizes food to the wishes of the user. Three years ago, it was rumored that food powerhouse Nestlé was working on a machine that could produce nutritional supplements on demand, code-named Iron Man. While Iron Man has yet to be released to the public, it does illustrate the convergence of 3D printing, robotics and kitchen appliances. The Consumer Electronics Show is still months away, but my appetite has already been whetted for more automated culinary treats – stay tuned!

Descartes revisited: Do robots think?

This past week, a robotic first happened: ABB’s Yumi robot conducted the Lucca Philharmonic Orchestra in Pisa, Italy. The dual-armed robot overshadowed even its vocal collaborator, Italian tenor Andrea Bocelli. While many will try to hype the performance as ushering in a new era of mechanical musicians, Yumi’s artistic career was short-lived, as it was part of the opening ceremonies of Italy’s First International Festival of Robotics.

Italian conductor Andrea Colombini said of his student, “The gestural nuances of a conductor have been fully reproduced at a level that was previously unthinkable to me. This is an incredible step forward, given the rigidity of gestures by previous robots. I imagine the robot could serve as an aid, perhaps to execute, in the absence of a conductor, the first rehearsal, before the director steps in to make the adjustments that result in the material and artistic interpretation of a work of music.”

Harold Cohen with his robot AARON

Yumi is not the first computer artist. In 1973, professor and artist Harold Cohen created a software program called AARON – a mechanical painter. AARON’s works have been exhibited worldwide, including at the prestigious Venice Biennale. Following Cohen’s lead, Dr. Simon Colton of London’s Imperial College created “The Painting Fool,” with works on display in Paris’ Galerie Oberkampf in 2013. Colton wanted to test whether he could cross the emotional threshold with an artistic Turing Test. Colton explained, “I realized that the Painting Fool was a very good mechanism for testing out all sorts of theories, such as what it means for software to be creative. The aim of the project is for the software itself to be taken seriously as a creative artist in its own right, one day.”

In June 2015, Google’s Brain AI research team took artistic theory to the next level by infusing its software with the ability to create a remarkably human-like quality of imagination. To do this, Google’s programmers took a cue from one of the most famous masters of all time, Leonardo da Vinci. Da Vinci suggested that aspiring artists should start by looking at stains or marks on walls to create visual fantasies. Google’s neural net did just that, translating the layers of an image into spots and blotches with new stylized, painterly features (see the examples below).

1) Google uploaded a photograph of a standard Southwestern scene:

2) The computer then translated the layers as below:

In describing his creation, Google Brain senior scientist Douglas Eck said this past March, “I don’t think that machines themselves just making art for art’s sake is as interesting as you might think. The question to ask is, can machines help us make a new kind of art?” The goal of Eck’s platform, called Magenta, is to enable laypeople, regardless of talent, to design new kinds of music and art, much as synthesizers, drum machines and camera filters have done. Dr. Eck is an admittedly frustrated failed musician who hopes that Magenta will revolutionize the arts in the same way as the electric guitar. “The fun is in finding new ways to break it and extend it,” Eck said excitedly.
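For the technically curious, the core trick behind these dream-like images can be sketched in a few lines: pick a layer of a convolutional network and run gradient ascent on the input image so that the layer’s activations grow stronger. The snippet below uses a tiny untrained network so it runs anywhere; it is an illustration of the idea, not Google’s actual DeepDream code, and the striking visuals only emerge when the same loop is run against a large network trained on real images.

```python
import torch
import torch.nn as nn

# "Amplify what the layer sees": gradient ascent on the input image to
# maximize the activations of a chosen convolutional layer.
torch.manual_seed(0)

features = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),   # "dream" at this layer
)

image = torch.rand(1, 3, 64, 64, requires_grad=True)   # start from noise or a photo
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(100):
    optimizer.zero_grad()
    activations = features(image)
    loss = -activations.norm()        # ascend: make the layer respond strongly
    loss.backward()
    optimizer.step()
    image.data.clamp_(0, 1)           # keep pixels in a displayable range

print(float(features(image).norm()))  # activation strength grows over the steps
```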

The artistic development and growth of these computer programs is remarkable. Cohen, who passed away last year, said in a 2010 lecture regarding AARON that “with no further input from me, it can generate unlimited numbers of images, it’s a much better colorist than I ever was myself, and it typically does it all while I’m tucked up in bed.” Feeling proud, he later corrected himself: “Well, of course, I wrote the program. It isn’t quite right to say that the program simply follows the rules I gave it. The program is the rules.”

In reflecting on the societal implications of creative bots, one cannot help but be reminded of the famous statement by philosopher René Descartes: “I think, therefore I am.” Challenging this idea for the robotic age, Professor Noriko Arai set out to test the thinking capabilities of robots. In 2011, she led a research team at Japan’s National Institute of Informatics to build an artificial intelligence program smart enough to pass the rigorous entrance exam of the University of Tokyo.

“Passing the exam is not really an important research issue, but setting a concrete goal is useful. We can compare the current state-of-the-art AI technology with 18-year-old students,” explained Dr. Arai. The original goal set by Arai’s team was for the Todai Robot (named for the University of Tokyo) to be admitted to the university by 2021. At a TED conference earlier this year, Arai shocked the audience by revealing that Todai beat 80% of the students taking the exam, which consisted of seven sections, including math, English, science, and even a 600-word essay. Rather than celebrating, Arai shared her fear with the crowd: “I was alarmed.”

Todai is able to search and process an immense amount of data, but unlike humans it does not understand what it reads, even with 15 billion sentences already in its neural network. Arai reminds us that “humans excel at pattern recognition, creative projects, and problem solving. We can read and understand.” However, she is deeply concerned that modern educational systems focus more on facts and figures than on creative reasoning, especially because humans could never compete with an AI at fact retrieval. Arai pointed to the entrance exam as an example: the Todai Robot failed to grasp a multiple-choice question that would have been obvious even to young children. She tested her thesis at a local middle school and was dumbfounded when one-third of the students couldn’t even “answer a simple reading comprehension question.” She concluded that in order for humans to compete with robots, “We have to think about a new type of education.”

Cohen also wrestled with the question of a thinking robot and whether his computer program could ever have the emotional impact of a human artist like Monet or Picasso. In his words, to reach that kind of level a machine would have to “develop a sense of self.” Cohen professed that “if it doesn’t, it means that machines will never be creative in the same sense that humans are creative.” Cohen later qualified his remarks about robotic creativity, adding, “it doesn’t mean that machines have no part to play with respect to creativity.”

Arai is much more to the point: “How we humans will coexist with AI is something we have to think about carefully, based on solid evidence. At the same time, we have to think in a hurry because time is running out.” John Cryan, CEO of Deutsche Bank, echoed Arai’s sentiment at a banking conference last week: “In our banks we have people behaving like robots doing mechanical things, tomorrow we’re going to have robots behaving like people. We have to find new ways of employing people and maybe people need to find new ways of spending their time.”

Reprogramming nature

Credit: Draper

Summer is not without its annoyances — mosquitos, wasps, and ants, to name a few. As the cool breeze of September pushes us back to work, labs across the country are reconvening to tackle nature’s hardest problems. Sometimes forces that seem diametrically opposed come together in beautiful ways, as when robotics is infused into living organisms.

This past summer, researchers at Harvard and Arizona State University collaborated on successfully turning living E. coli bacteria into a cellular robot, called a “ribocomputer.” Taking archived movie footage, the Harvard scientists were able to store the digital content in the genome of the bacterium most famous for making Chipotle customers violently ill. According to Seth Shipman, the lead researcher at Harvard, this was the first time anyone had archived data in a living organism.

Responding to the original article published in July in Nature, Julius Lucks, a bioengineer at Northwestern University, said that Shipman’s discovery will enable wider exploitation of DNA encoding. “What these papers represent is just how good we are getting at harnessing that power,” explained Lucks. The key to the discovery was Shipman’s ability to translate the movie pixels into DNA’s four-letter code (“molecules represented by the letters A, T, G and C”) and synthesize that DNA. But instead of generating one long strand of code, the team arranged it, along with other genetic elements, into short segments that looked like fragments of viral DNA. Another important factor was E. coli’s natural ability “to grab errant pieces of viral DNA and store them in its own genome—a way of keeping a chronological record of invaders. So when the researchers introduced the pieces of movie-turned-synthetic DNA—disguised as viral DNA—E. coli’s molecular machinery grabbed them and filed them away.”
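The underlying mapping from digital data to the four-letter alphabet is easy to illustrate. The toy encoder below packs two bits into each base; it is only a conceptual sketch, not the encoding scheme the Harvard team actually used (their approach also had to respect biological constraints on the synthesized sequences).

```python
# Two bits per base: every byte of digital data can be spelled with A, C, G, T.
BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS_FOR_BASE = {v: k for k, v in BASE_FOR_BITS.items()}

def bytes_to_dna(data: bytes) -> str:
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):               # four 2-bit chunks per byte
            bases.append(BASE_FOR_BITS[(byte >> shift) & 0b11])
    return "".join(bases)

def dna_to_bytes(seq: str) -> bytes:
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BITS_FOR_BASE[base]
        out.append(byte)
    return bytes(out)

frame = bytes([0, 85, 170, 255])                 # four grayscale pixel values
encoded = bytes_to_dna(frame)
print(encoded)                                   # AAAACCCCGGGGTTTT
assert dna_to_bytes(encoded) == frame            # round-trips losslessly
```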

Shipman used this methodology to eventually turn cells into a computer that not only stores data but actually makes logic-based decisions. Partnering with Alexander Green at Arizona State University’s Biodesign Institute, the two institutions collaborated on building their ribocomputer, which programs bacteria using ribonucleic acid (RNA). According to Green, the “ribocomputer can evaluate up to a dozen inputs, make logic-based decisions using AND, OR, and NOT operations, and give the cell commands.” Green stated that this is the most complex biological computer created in a living cell to date. The discovery by Green and Shipman means that cells could one day be programmed to self-destruct if they sense the presence of cancer markers, or even heal the body from within by attacking foreign toxins.
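In software terms, the decision layer Green describes is simply Boolean logic over sensed inputs. The sketch below mirrors that idea with made-up marker names; it is ordinary code rather than synthetic biology, and the conditions and outputs are purely illustrative.

```python
# Boolean decision logic of the kind a ribocomputer could implement in a cell.
# Marker names and rules are invented for illustration.
def cell_decision(markers: dict) -> str:
    cancer_signature = markers["marker_a"] and markers["marker_b"]   # AND
    stress_signal = markers["toxin"] or markers["ph_drop"]           # OR
    healthy_override = not markers["healthy_tag"]                    # NOT

    if cancer_signature and healthy_override:
        return "self-destruct"
    if stress_signal:
        return "express repair protein"
    return "idle"

print(cell_decision({"marker_a": True, "marker_b": True,
                     "toxin": False, "ph_drop": False,
                     "healthy_tag": False}))   # -> self-destruct
```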

Timothy Lu of MIT called the discovery the beginning of the “golden age of circuit design.” Lu added, “The way that electrical engineers have gone about establishing design hierarchy or abstraction layers — I think that’s going to be really important for biology.” In a recent IEEE article, Lucks cautioned readers that manipulating nature in this way ultimately raises a host of ethical considerations: “I don’t think anybody would really argue that it’s unethical to do this in E. coli. But as you go up in the chain [of living organisms], it gets more interesting from an ethical point of view.”

Nature has been the inspiration for numerous discoveries in modern robotics and has even spawned its own field, biomimicry. However, manipulating living organisms according to the whims of humans is just beginning to take shape. A couple of years ago, Hong Liang, a researcher at Texas A&M University, outfitted a cockroach with a 3-gram backpack-like device containing a microprocessor, lithium battery, camera sensor, and electrical nerve-control system. Liang then used her makeshift insect robo-suit to remotely drive the waterbug through a maze.

When asked by the Guardian what prompted her to utilize bugs as robots, Liang explained, “Insects can do things a robot cannot. They can go into small places, sense the environment, and if there’s movement, from a predator say, they can escape much better than a system designed by a human. We wanted to find ways to work with them.”

A cockroach outfitted with front and rear electrodes as well as a “backpack” for wireless control.
Credit: Alper Bozkurt, North Carolina State University

Liang believes that robo-roaches could be especially useful in disaster recovery situations that take advantage of the insect’s size and endurance. Liang says that some cockroaches can carry five times their own bodyweight, but the heavier the load, the greater the toll on their performance. “We did an endurance test and they do get tired,” Liang explained. “We put them on a treadmill for a minute and then let them rest. If the backpack is lighter, they can go on for longer.” Liang has since inspired other labs to work with different species of insects.

Draper, the US defense contractor, is working on its own insect robot, turning live dragonflies into controllable, undetectable drones. The DragonflEye Project deviates from the technique developed by Liang in that it uses light to steer neurons rather than electrical nerve stimulation. Jesse Wheeler, the project lead at Draper, says this methodology is like “a joystick that tells the system how to coordinate flight activities.” Through this “joystick,” Wheeler can steer the wings in flight and program coordinates for the insect’s mission via an attached micro backpack that includes a guidance system, solar energy cells, navigation electronics, and optical stimulation hardware.

Draper believes that swarms of digitally enhanced insects might hold the key to national defense, as locusts and bees have been programmed to identify scents such as chemical explosives. The critters could eventually be programmed to collect and analyze samples for homeland security, in addition to obvious surveillance opportunities. Liang boasts that her cyborg roaches are “more versatile and flexible, and they require less control” than traditional robots. However, Liang also reminds us that “they’re more real”; ultimately, living organisms, even with mechanical backpacks, are not machines.

Author’s note: This topic and more will be discussed at our next RobotLabNYC event in one week on September 19th at 6pm, “Investing In Unmanned Systems,” with experts from NASA, AUVSI, and Genius NY.
