
Swarming drones could help fight Europe’s megafires

More than 700,000 hectares of land in the EU were destroyed by forest fires between January and September 2017. Image credit – CC0 Public Domain
by Rob Coppinger

Swarms of firefighting drones could one day be deployed to tackle hugely destructive megafires that are becoming increasingly frequent in the Mediterranean region because of climate change, arson and poor landscape management.

It’s one of a number of initiatives looking at how best to fight large fires from the air – a challenge that’s becoming more and more common.

A 2017 report on forest fires by the EU’s Joint Research Centre said that the year would ‘likely be remembered as one of the most devastating wildfire seasons in Europe since records began’, after the destruction of nearly 700,000 hectares of land in the EU by early September.

Such fires are dangerous not only for people who live in the area but also for the firefighting crews whose job it is to put them out. Using intelligent robots to scout the area and drop water allows humans to stand further back from the danger zone, making decisions from the safety of a command and control centre based only on the drones’ data.

Because drones can fly day or night and gain rapid access to previously inaccessible urban or rural fires, they can help to save the lives of both the public and first responders.

Torrential

Multiple autonomous drones dropping 600 litres of water every minute during the night while other unmanned vehicles refill to repeat the attack on a raging fire is the vision of Spanish company Drone Hopper. Despite this torrential approach, ‘we are not meant to be competitors with the airplanes and helicopters, we want to be complementary,’ Drone Hopper’s chief executive officer, Pablo Flores, said.

The company’s drone uses thermal cameras to locate the fire, analyse it, send back the data and identify what type of fire it is. At just over a metre and a half in length, it can be deployed from an aircraft as well as a ground vehicle.

Like a helicopter, the drone can hover directly over a specific burning area, but instead of a single rotor it has many propellers. Once over its target it releases its liquid cargo as a mist designed specifically for the fire type identified.

A mist is good at fighting fire because it cools the area by evaporation and it blocks the transfer of heat to anything flammable nearby. To create the right type of mist, the Drone Hopper uses a proprietary magnetic system and the jet wash from its many propellers to direct the released water and nebulise it.
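To put that cooling effect in perspective, here is a rough back-of-the-envelope estimate (a sketch using standard physical constants, not Drone Hopper data) of how much heat 600 litres of water per minute can absorb if the mist is warmed from ambient temperature and fully vaporised:

```python
# Rough estimate of the heat a 600 L/min water mist can absorb.
# The flow rate comes from the article; the constants are standard values.

FLOW_L_PER_MIN = 600             # litres per minute, roughly 600 kg/min
C_WATER = 4.186e3                # specific heat of water, J/(kg*K)
L_VAPORISATION = 2.26e6          # latent heat of vaporisation, J/kg
T_AMBIENT, T_BOIL = 20.0, 100.0  # degrees Celsius

mass_per_s = FLOW_L_PER_MIN / 60.0                     # ~10 kg of water per second
heating = mass_per_s * C_WATER * (T_BOIL - T_AMBIENT)  # warming the mist to boiling
vaporising = mass_per_s * L_VAPORISATION               # turning the mist to steam
print(f"Heat absorbed: {(heating + vaporising) / 1e6:.0f} MW")  # roughly 26 MW
```

Fully vaporising the mist absorbs several times more energy than merely heating the water to the boil, which is one reason nebulising the drop matters so much.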

Flores wants to offer his drone, which is still in development, to local authorities for firefighting. ‘They can’t buy a $30 million airplane, but can have this platform and have their own means to (tackle a fire).’ He says that, per litre of water delivered, the Drone Hopper UAV costs a fifth as much as a water tanker aircraft.

Spanish company Drone Hopper wants to create a fleet of drones that can drop 600 litres of water every minute during the night. Image credit – Drone Hopper

But, there is a regulatory obstacle. At the moment, it hasn’t been proven that drones can reliably act autonomously, so national rules generally require each one to have a human remote pilot.

Dr Nazim Kemal Ure, an assistant professor in the aerospace department at Istanbul Technical University in Turkey, said: ‘With multiple autonomous systems many things can be achieved much more quickly.’

Swarming

He is developing a way of coordinating drones that he hopes could contribute to a change in regulation. By the end of the year, Dr Ure expects to be field testing autonomous drones and their swarming algorithms, developed under the DUF project.

Like Drone Hopper, ‘we are detecting the fire by using vision,’ Dr Ure explained. In initial testing the image processing will not be done by the drones themselves, but eventually, in real-world flight tests, the algorithms will run onboard.
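The project’s detection pipeline isn’t described in detail, but the core idea of spotting fire in a thermal image can be illustrated with a simple threshold-and-cluster pass. The sketch below is a minimal, hypothetical version (NumPy only; the 300 °C threshold and the input format are assumptions, not project parameters):

```python
import numpy as np

def detect_hotspots(thermal_frame, threshold_c=300.0):
    """Return (row, col) centroids of connected pixel regions hotter than threshold_c.

    thermal_frame: 2-D NumPy array of per-pixel temperatures in degrees Celsius.
    """
    hot = thermal_frame > threshold_c
    visited = np.zeros_like(hot, dtype=bool)
    centroids = []
    for r, c in zip(*np.nonzero(hot)):
        if visited[r, c]:
            continue
        # Flood fill: group adjacent hot pixels into one hotspot.
        stack, pixels = [(r, c)], []
        visited[r, c] = True
        while stack:
            y, x = stack.pop()
            pixels.append((y, x))
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < hot.shape[0] and 0 <= nx < hot.shape[1]
                        and hot[ny, nx] and not visited[ny, nx]):
                    visited[ny, nx] = True
                    stack.append((ny, nx))
        ys, xs = zip(*pixels)
        centroids.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return centroids

# Example: a synthetic 100x100 frame at 25 C with one 5x5 patch at 600 C.
frame = np.full((100, 100), 25.0)
frame[40:45, 60:65] = 600.0
print(detect_hotspots(frame))  # approximately [(42.0, 62.0)]
```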

His drones would fly over a burning area and, by examining the vegetation, wind direction and other factors, predict the fire’s spread and direction. With that information they would then precisely drop retardant to stop the fire.
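The article doesn’t give the actual prediction model, but the idea can be sketched as a toy wind-biased cellular automaton: each burning cell ignites its neighbours with a probability that grows with the dryness of their vegetation and with how far downwind they lie. Everything below (parameter values, grid representation) is an illustrative assumption, not the DUF algorithm:

```python
import numpy as np

def step_fire(burning, fuel, wind=(0, 1), base_p=0.15, wind_bonus=0.35, rng=None):
    """Advance a toy fire-spread model by one time step.

    burning: boolean grid of currently burning cells.
    fuel:    grid of values in [0, 1] for the dryness/amount of vegetation.
    wind:    (dy, dx) direction the wind blows toward.
    """
    rng = rng or np.random.default_rng(0)
    rows, cols = burning.shape
    nxt = burning.copy()
    for y, x in zip(*np.nonzero(burning)):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (dy, dx) == (0, 0) or not (0 <= ny < rows and 0 <= nx < cols):
                    continue
                # Spread is more likely into drier vegetation and downwind cells.
                downwind = max(0.0, dy * wind[0] + dx * wind[1])
                if rng.random() < fuel[ny, nx] * (base_p + wind_bonus * downwind):
                    nxt[ny, nx] = True
    return nxt
```

Cells that such a model expects to ignite over the next few steps are the natural targets for a precise retardant drop.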

Dr Ure added that further flight-testing may see cooperation with the Turkish government’s Ministry of Forestry and involve a controlled fire.

However, there is work yet to be done to improve the computer-generated fire images in the simulated environment they are using for training the artificial intelligence. ‘Our models are, in the graphical parts, not state-of-the-art,’ said Dr Ure. He wants to have ‘hyper-realistic’ fire for the drones’ vision analysis software to learn from. For him, that will help ensure the drones will operate well in the real world.

And in the real world, Dr Ure sees many other applications. ‘I would like to extend this algorithm to other scenarios such as search and rescue and planetary exploration,’ he said.

Types of forest fires
Ground fires occur 25 to 50 cm underground and move slowly, burning through peat and roots. They are notoriously difficult to put out and, if the conditions are right, they can smoulder through the winter and then move above ground in spring.

Surface fires move at a speed of between 3 and 300 metres per minute but burn only the lower vegetation and leave the trees unaffected. Of all the fires, they cause the least damage and are usually easy to put out.

Ladder fires climb from the smaller vegetation up into the taller trees. Vines and invasive plants help the fire gain momentum.

Crown fires burn trees all the way to the canopy and are the hottest and most dangerous of wildfires. They can spread quickly and are very difficult to put out, partly due to the height of the flames. They can spread beyond natural firebreaks such as rivers through a process called spotting, where wind or hot air carries a piece of burning wood elsewhere and starts a new fire.

The research in this article is funded by the EU.

More info
Drone Hopper
DUF


Robots in Depth with Justin Werfel

In this episode of Robots in Depth, Per Sjöborg speaks with Justin Werfel, senior research scientist at Harvard’s Wyss Institute for Biologically Inspired Engineering.

Justin talks about what termites can teach us about creating impressive structures using autonomous swarms of robots, as demonstrated in the Termes project. We also hear how Justin was drawn to robotics by the balance between theoretical and practical work.

Uber’s fatal crash

Credit: Uber

By Bryant Walker Smith

An automated vehicle in Uber’s fleet fatally struck a woman crossing a street in Arizona. A few points pending more information:

  1. This sad incident will test whether Uber is becoming a trustworthy company. Uber needs to be unflinchingly candid and unfailingly helpful in the multiple investigations that are likely to result. It shouldn’t even touch its onboard and offboard systems unless credible observers are present. In this crash, a multitude of data will likely be available to help understand what happened—but only if those data can be believed.
  2. The circumstances of this crash certainly suggest that something went wrong. Was the vehicle traveling at a speed appropriate for the conditions? Did the automated driving system and the safety driver recognize the victim, predict her path, and respond appropriately? The lawfulness of the victim’s actions is only marginally relevant to the technical performance of Uber’s testing system (which includes both vehicle and driver).
  3. Regardless of whether this crash was unavoidable, serious developers and regulators of automated driving systems understand that tragedies will occur. Automated driving is a challenging work in progress that may never be perfected, and I would be skeptical of anyone who claims that automated driving is a panacea—or who expresses shock that it is not.
  4. However, this incident was uncomfortably soon in the history of automated driving. In the United States, there’s about one fatality for every 100 million vehicle miles traveled, and automated vehicles are nowhere close to reaching this many real-world miles. This arguably first fatality may not tell us much statistically, but neither is it reassuring.
  5. On the same day that this tragic crash happened, about 100 other people died in crashes in the United States alone. Although they won’t make international news, their deaths are also tragedies. And most of them will have died because of human recklessness, including speeding, drinking, aggression, and distraction. This is a public health crisis, and automated driving may play an important role (though by no means the only role) in addressing it. In short: We should remain concerned about automated driving but terrified about conventional driving.
  6. Technologies are understood through stories—both good and bad. I don’t know how this tragic story will play out in the fickle public. Surprisingly, Tesla’s fatal 2016 crash doesn’t seem to have dramatically shifted attitudes toward driving technologies. But that was Tesla, and this is Uber. And whereas few people use Autopilot, almost everyone is a pedestrian.
  7. The current tragedy includes a long prologue that does not look good. In 2016, Uber refused to comply with California’s automated vehicle law, the state revoked the company’s vehicle registrations, Arizona’s governor tweeted “This is what OVER-regulation looks like! #ditchcalifornia,” and Uber trucked its vehicles down to his state.
  8. Developers need to show that they are worthy of the tremendous trust that regulators and the public necessarily place in them. They need to explain what they’re doing, why they believe it is reasonably safe, and why we should believe them. They need to candidly acknowledge their challenges and failures, and they need to readily mitigate the harms caused by those failures. I expand on these principles in a paper (“The Trustworthy Company”) forthcoming at newlypossible.org.

European Robotics League winners revealed at #ERF2018

Award winners in robot competitions held by the European Robotics League (ERL) were named on 14 March 2018, during this year’s European Robotics Forum (ERF), held in Tampere, Finland on 13–15 March.

Awards for the ERL’s 2017-18 season were presented at a Gala Dinner to winning teams that took part in all ERL competitions: Service Robots (ERL-SR), Industry Robots (ERL-IR) and Emergency Robots (ERL-ER).

ERL-SR is for robots that could provide assistance in homes, particularly for people with reduced mobility. ERL-ER is for robots in simulated emergency situations and ERL-IR tackles automation in industry.

Dozens of teams from around Europe took part in the 2017–18 ERL competitions, which stimulate innovation and collaboration among robotics researchers by setting tasks in simulated real-life conditions, for completion against the clock.

The challenges include understanding natural speech, finding and retrieving objects, greeting visitors, supplying medical aid kits, and stopping simulated radioactive leaks. The teams’ technical approaches could find their way into future commercial robots and could be suitable for a wide range of non-robotics uses in all areas of life.

ERL teams are ranked on their end-of-year scores for the various task and functionality benchmarks, counting each team’s best two tournament results (a short scoring sketch follows the score links below). The following teams were awarded on stage during the ERF2018 Awards Gala Dinner.

ERL Service Robots:

  • homer@UniKoblenz, Germany, won the first prize for: Task Benchmark 1 “Getting to Know My Home”; Task Benchmark 2 “Welcoming Visitors”; Task Benchmark 3 “Catering for Granny Annie’s Comfort”; Task Benchmark 5 “General Purpose Service Robot (GPSR)”;
  • HEARTS, Bristol, UK, won the first prize for: Task Benchmark 2 “Welcoming Visitors”; Functionality Benchmark 3 “Speech Recognition”;
  • IRI@ERL, Barcelona, Spain, won the first prize for: Task Benchmark 2 “Welcoming Visitors”; Task Benchmark 3 “Catering for Granny Annie’s Comfort”;
  • RoboticsLab UC3M, Madrid, Spain, won the first prize for Task Benchmark 4 “Visit My Home”;
  • SocRob@Home, Lisbon, Portugal, won the first prize for: Task Benchmark 1 “Getting to Know My Home”.

European Robotics League Service Awardees 2017-2018 and Coordinators. Credits: Visual Outcasts

ERL Industry Robots:

  • b-it-bots Bonn-Rhein-Sieg University, Bonn, Germany, won the first prize for: Functionality Benchmark 4 “Navigation Functionality”.

European Robotics League Industry Awardees 2017-2018 and Coordinators. Credits: Visual Outcasts

ERL Emergency Robots:

  • IMM – Janusz Bedzowski won the first prize for: Task Benchmark 2 (Land + Air) “Survey the building and search for missing workers”; Functionality Benchmark 1 “2D Mapping Functionality” (Land + Air);

The Grand Challenge Task Benchmark 1 (land + sea + air) was won by:

  • Universitat de Girona – Eric Pairet, who also won the first prize for: Task Benchmark 4 (Land + Sea) “Stem the leak”; Functionality Benchmark 3 (Sea) “Object Detection”; Task Benchmark 3 (Sea + Air) “Pipe inspection & search for missing workers”;
  • Telerob – Andreas Ciossek, who also won the first prize for: Task Benchmark 4 (Land + Sea) “Stem the leak”; Functionality Benchmark 2 (Land + Air) “Object Recognition”;
  • ISEP/INESC – Alfredo Martins, who also won the first prize for: Task Benchmark 3 (Sea + Air) “Pipe inspection and search for missing workers”; Functionality Benchmark 2 “Object Recognition” (Land + Air).

European Robotics League Emergency Awardees 2017-2018 and Coordinators. Credits: Visual Outcasts

The ERL-ER prizes for task challenges were awarded during the Awards Ceremony for the emergency robots competition held at Piombino, Italy on 15-23 September 2017.

European Robotics League
The European Robotics League (ERL) is the successor to the RoCKIn, euRathlon and EuRoC robotics competitions, funded by the EU and designed to foster scientific progress and innovation in cognitive systems and robotics. The ERL is funded by the European Union’s Horizon 2020 research and innovation programme and runs competitions for service (ERL-SR), industrial (ERL-IR) and emergency robots (ERL-ER).
The latest ERL-SR scores can be found here.

The latest ERL-IR scores can be found here.

The latest ERL-ER scores can be found here.
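As noted above, each team’s season ranking counts its best two tournament results. A minimal sketch of that scoring rule (the team names and scores below are invented purely for illustration):

```python
def season_score(tournament_scores, best_n=2):
    """Sum of a team's best `best_n` tournament scores, per the rule described above."""
    return sum(sorted(tournament_scores, reverse=True)[:best_n])

# Hypothetical example data, not real ERL results.
results = {
    "Team A": [78.0, 65.5, 81.0],
    "Team B": [90.0, 40.0],
    "Team C": [55.0],
}
ranking = sorted(results, key=lambda team: season_score(results[team]), reverse=True)
print(ranking)  # ['Team A', 'Team B', 'Team C']
```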

SXSW 2018: Protect AI, robots, cars (and us) from bias

As Mark Hamill humorously shared behind-the-scenes stories from “Star Wars: The Last Jedi” with a packed SXSW audience, two floors below on the exhibit floor Universal Robots recreated General Grievous’ famed lightsaber battles. The battling machines were steps away from a twelve-foot dancing Kuka robot and an automated coffee dispensary. Somehow the famed interactive festival, known for its late-night drinking, dancing and concerts, had a very mechanical feel this year. Everywhere, debates ensued between utopian tech visionaries and dystopia-fearing humanists.

Even my panel on “Investing In The Autonomy Economy” took a very social turn when discussing the opportunities of using robots for the growing aging population. Eric Daimler (formerly of the Obama White House) raised concerns about AI bias affecting the well-being of seniors. Agreeing, Dan Burstein (partner at Millennium Tech Value Partners) nervously observed that ‘AI is everywhere, in everything, and the USA has no other way to care for this exploding demographic except with machines.’ Daimler explained that “AI is very good at perception, just not context;” until this is solved, it could be a very dangerous problem worldwide.

Last year at a Google conference on the relationship between humans and AI, the company’s senior vice president of engineering, John Giannandrea, warned, “The real safety question, if you want to call it that, is that if we give these systems biased data, they will be biased. It’s important that we be transparent about the training data that we are using, and are looking for hidden biases in it, otherwise we are building biased systems.” Similar to Daimler’s anxiety about AI and healthcare, Giannandrea exclaimed that “If someone is trying to sell you a black box system for medical decision support, and you don’t know how it works or what data was used to train it, then I wouldn’t trust it.”


One of the most famous illustrations of how quickly human bias influences computer actions is Tay, the Microsoft customer service chatbot on Twitter. It took only twenty-four hours for Tay to develop a Nazi persona, leading to more than ninety thousand hate-filled tweets. Tay swiftly calculated that hate on social media equals popularity. In explaining its failed experiment to Business Insider, Microsoft stated via email: “The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We’re making some adjustments to Tay.”

While Tay’s real impact was benign, it raises serious questions about the implications of embedding AI into machines and society. In its Pulitzer Prize-finalist article, ProPublica.org uncovered that a widely used piece of US criminal justice software called Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) was racially biased in scoring the risk that convicted felons would reoffend. ProPublica discovered that black defendants in Florida “were far more likely than white defendants to be incorrectly judged to be at a higher rate of recidivism” by the AI. Northpointe, the company that created COMPAS, released its own report disputing ProPublica’s findings, but it refused to pull back the curtain on its training data, keeping the algorithms hidden in a “black box.” In a statement released to the New York Times, Northpointe’s spokesperson argued, “The key to our product is the algorithms, and they’re proprietary. We’ve created them, and we don’t release them because it’s certainly a core piece of our business.”

The dispute between Northpointe and ProPublica raises the question of transparency and of having an independent arbitrator audit the data to protect against bias. Cathy O’Neil, a former Barnard professor and analyst at D.E. Shaw, thinks a lot about safeguarding ordinary Americans from biased AI. In her book, Weapons of Math Destruction, she cautions that corporate America is too willing to hand the wheel over to algorithms without fully assessing the risks or implementing any oversight. “[Algorithms] replace human processes, but they’re not held to the same standards. People trust them too much,” declares O’Neil. Understanding the high stakes and the lack of regulatory oversight by the current federal government, O’Neil left her high-paying Wall Street job to start a software auditing firm, O’Neil Risk Consulting & Algorithmic Auditing. In an interview with MIT Technology Review last summer, O’Neil expressed frustration that companies are more interested in the bottom line than in protecting their employees, customers and families from bias: “I’ll be honest with you. I have no clients right now.”

Most of the success in deconstructing “black boxes” is happening today at the US Department of Defense. DARPA has been funding the research of Dr. David Gunning to develop Explainable Artificial Intelligence (XAI). Understanding its own AI, and that of foreign governments, could be a huge advantage for America’s cyber military units. At the same time, as with many DARPA-funded projects, civilian applications could offer societal benefits. According to Gunning’s online statement, XAI aims to “produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.” XAI plans to work with developers and user interface designers to foster “useful explanation dialogues for the end user,” so that users know when to trust or question the AI-generated data.
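XAI is a research programme rather than a single algorithm, but one common, model-agnostic building block for explaining a “black box” is permutation feature importance: shuffle one input at a time and measure how much the model’s accuracy drops. The sketch below is a generic illustration, not DARPA’s method; it assumes `model` follows scikit-learn’s convention of exposing a `score(X, y)` method and that `X` is a 2-D NumPy feature matrix:

```python
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, rng=None):
    """Mean drop in model.score when each feature column is shuffled.

    A large drop for column j suggests the fitted model relies heavily on feature j,
    which is a crude but useful first explanation of its behaviour.
    """
    rng = rng or np.random.default_rng(0)
    baseline = model.score(X, y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            # Destroy the information in feature j only, leaving the rest intact.
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            drops.append(baseline - model.score(X_perm, y))
        importances[j] = np.mean(drops)
    return importances
```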

Besides DARPA, many large technology companies and universities are starting to create think tanks, conferences and policy groups to develop standards for testing AI bias. The results have been startling – ranging from computer vision sensors that misidentify or fail to recognize people of color, to gender bias in employment management software, to blatant racism in natural language processing systems, to security robots that run over kids they mistakenly identify as threats. As an example of how training data affects outcomes, when Google first released its image processing software, the AI labeled photos of African Americans as “gorillas” because the engineers had failed to include enough examples of minorities in the neural network’s training data.
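One low-tech safeguard implied by that incident is simply auditing the label distribution of a training set before training, so under-represented classes are caught early. A minimal sketch (the labels and the 5% warning threshold are hypothetical):

```python
from collections import Counter

def audit_label_balance(labels, warn_below=0.05):
    """Print each class's share of the training labels and flag under-represented ones."""
    counts = Counter(labels)
    total = sum(counts.values())
    for label, n in sorted(counts.items(), key=lambda kv: kv[1]):
        share = n / total
        flag = "  <-- under-represented" if share < warn_below else ""
        print(f"{label:>24}: {n:6d} ({share:.1%}){flag}")

# Hypothetical labels for an image-classification training set.
audit_label_balance(["landmark"] * 9000 + ["pet"] * 800 + ["under_represented_group"] * 200)
```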

Ultimately, artificial intelligence reflects the people who program it, since every developer brings their own experiences, and those experiences shape personal biases. According to Kathleen Walch, host of the AI Today podcast, “If the researchers and developers developing our AI systems are themselves lacking diversity, then the problems that AI systems solve and training data used both become biased based on what these data scientists feed into AI training data.” Walch advocates that hiring for diversity brings “about different ways of thinking, different ethics and different mindsets. Together, this creates more diverse and less biased AI systems. This will result in more representative data models, diverse and different problems for AI solutions to solve, and different use cases feed to these systems if there is a more diverse group feeding that information.”

Before leaving SXSW, I attended a panel hosted by the IEEE on “Algorithms, Unconscious Bias & AI,” amazingly all led by female panelists, including one person of color. Hiring bias became a big theme of their discussion. Following the talk, I hopped into my Uber and pleasantly rode to the airport reflecting on a statement made earlier in the day by John Krafcik, chief executive of Waymo. Krafcik boasted that Waymo’s mission is to build “the world’s most experienced driver.” I just hope that the training data is not from New York City cabbies.

10 tech-savvy companies on the hunt for AI/robotics talent and IP

Tencent, Alibaba, Baidu and JD.com from China are in a global competition with Google/Alphabet, Apple, Facebook, Walmart and Amazon from the USA, and SoftBank from Japan. All are aggressively searching for talent, intellectual property, market share, logistics and supply chain technology, and presence all around the world.

These leading tech-savvy companies have many things in common. Foremost, they are all in pursuit of global growth and the funding, technology and talent to propel that growth. And they are all investing in voice assistants and other forms of AI and robotics.

Although Amazon is leading the way with the ecosystem surrounding its AI assistant Alexa, each of the others either has or is developing a competing system of equal or greater capability… think Google’s OK Google, Apple’s Siri and new HomePod, and Microsoft’s Cortana or, in China, Alibaba’s Tmall Genie, Baidu’s Little Fish and JD’s DingDong.

Also, they are all moving toward providing AI as a service.

  • Baidu (NASDAQ:BIDU) is China’s primary search source and also provides Internet-related services and products as well as targeted advertising, transaction services and a video platform. Baidu is heavily investing in researching deep learning, computer vision, speech recognition and synthesis, natural language understanding, data mining and knowledge discovery, business intelligence, artificial general intelligence, high performance computing, robotics and autonomous driving (at their new self-driving lab in Silicon Valley).
  • Alibaba (NYSE:BABA) is a multinational, China-based e-commerce retailer, payments and technology conglomerate and cloud provider whose two shopping malls (Tmall and Taobao) have over 1 billion combined active users and are supported by a budding logistics network. Alibaba’s AI-powered platform (which it uses internally for its shopping malls and logistics processing) was recently rolled out in Kuala Lumpur to support smart cities in their digital transformation. It analyzes large volumes of data extracted from various sources in an urban environment through video, image and speech recognition. The system then uses machine learning to provide insights that help city administrators improve operational efficiency and monitor security risks.
  • Tencent (HKG:0700) is a Chinese provider of Internet and cloud-related services and products, entertainment, music services, AI, real estate and social media including WeChat (which recently hit 1 billion users). More than 35% of WeChat users spend over four hours a day on the service compared to the little more than an hour a day spent on Facebook, Instagram, Snapchat and Twitter combined. Tencent has set up AI labs in Shenzhen and Seattle and is researching voice and image recognition systems and transforming what they’ve learned into apps and algorithms to keep their users informed and attentive.

NOTE: Baidu, Alibaba and Tencent make up BAT, the acronym given to the trio of main competitors in China’s quantum computing and machine learning research. In addition to labs in China, each has a Silicon Valley research center. Funding and incentives are provided by the Chinese government. The three BAT companies already collect and analyze huge amounts of data, from their e-commerce transactions, mobile gaming, online search and payments to social media, video streaming and on-demand services such as ride-sharing and food delivery. With quantum computing, they will be able to sift through massive data streams faster and better than with existing supercomputers.

  • JD.Com (NASDAQ:JD) is a Chinese e-commerce competitor with about half the user base of Alibaba, yet with very progressive logistics and infrastructure programs. JD (Jingdong) is testing robotic delivery services, operating driverless delivery trucks and building drone delivery ports. JD operates 7 fulfillment centers and 405 warehouses in China. Last month it raised $2.5 billion for its JD Logistics subsidiary to build out and expand its logistics network.
  • SoftBank (TYO:9984) is a Japanese telecom conglomerate. SoftBank is also the instigator of the SoftBank Vision Fund, which is investing massive amounts ($98 bn) in technologies and entrepreneurs pioneering the future across a wide range of sectors: IoT, AI, robotics, mobile applications and computing, infrastructure, cloud technologies and software. SoftBank, with its funding partners Apple, Qualcomm and various sovereign wealth funds, wants to invest another $900 billion in 1,000 AI and robotics companies in the next decade. SoftBank is also a partner with Alibaba and Foxconn to produce and market Pepper and Nao robots.
  • Google/Alphabet (NASDAQ:GOOG) is a Silicon Valley search engine and Internet products company with a stable of AI ventures such as Waymo, Verb Surgical and Nest, along with consumer products like Google Home, Android phones and Chromebook computers. Google is leveraging its data, processing power and talent into an array of AI-based apps, processes and products. Its foray into robotics hardware produced much valuable research, but all of the units have either been sold off or closed (except for Boston Dynamics and Schaft, whose sales are held up by government regulators). Although still a leader in machine learning, Google faces stiff competition from its Chinese rivals.
  • Apple (NASDAQ:AAPL) is a Silicon Valley designer, manufacturer and marketer of phones, media and hardware devices and a provider of software, services and digital content. Apple is the world’s largest information technology company by revenue and the world’s second-largest mobile phone manufacturer after Samsung, with annual revenue of $229 billion. Building out Siri from the virtual world into the consumer product world with its new HomePod is off to a late start.
  • Facebook (NASDAQ:FB) is also a Silicon Valley-based Internet phenomenon, with products that include Facebook, Instagram, Messenger, WhatsApp and Oculus. Facebook has over 2.2 billion active users. Its investments in AI appear to be focused on developing a virtual (or physical) assistant; its acquisition of Ozlo to help Messenger build out a more elaborate virtual assistant for users is one example.
  • Walmart (NYSE:WMT) is a global retailer with wholesale facilities, logistics and distribution centers all around the world. Walmart operates over 11,000 stores under 59 names in 28 countries and e-commerce sites in 11 countries. It grosses over $480 billion annually and employs over 2.3 million workers. As Walmart increases its online e-commerce market share while simultaneously changing practices to provide better product transparency and faster material handling at its stores and distribution centers, it too is on a talent hunt for roboticists and AI/machine learning people and providers.
  • Amazon (NASDAQ:AMZN) is the leading e-commerce seller of products, supply chain services, AI and cloud services, copied and competed with around the world. Amazon accounts for ~4% of all retail and ~44% of all e-commerce spending in the US. Its supply chain and logistics facilities use more than 60,000 robots across its warehouses and distribution centers, and its cloud services not only serve Amazon itself but also provide on-demand cloud computing platforms to companies and governments on a subscription basis. Amazon’s Echo/Alexa home assistant has added capabilities such as a display, camera, alarm clock, security cameras and even a fashion advisor. The company is combining these incremental parts, front-ended by the Alexa voice assistant, to build toward a smart home robot as the pieces become viable.

Amazon is the company to watch in terms of early innovation. Others follow and emulate; Amazon quietly goes forward, and China is on its horizon. CBInsights made two interesting observations on the subject: looking at which peer companies get talked about in financial reports and earnings calls, it found that Amazon doesn’t mention competitors at all, while Amazon’s mentions of China are up 57% over 2016.

NOTE: There are no European companies in this list, nor in the Top 15 Alexa Sites. Large robotics firms in Germany and Italy have been sold to China; ARM, the British chip-maker, was sold to SoftBank, and DeepMind, the UK AI wonder, was picked up by Google. Many fear that Europe may excel at manufacturing but lacks both the protectionist impulses to fend off (and keep up with) America and China and the recognition that smart manufacturing and smart cars – in fact, smart everything – are the new game. Recently, European leadership has shown wariness about the use of, and connection to, cloud and analytics platforms in the age of IoT – even though Europeans pioneered the term Industry 4.0.

A major talent-hunting event is the big NVIDIA GPU Technology Conference being held in San Jose on March 26-29. Over 8,000 industry professionals of all types plan to attend this combined job fair and venue for learning about AI, machine learning and deep learning.

Infrastructure

Common to all is e-commerce and the systems that pick, pack, ship and deliver all the goods. Thus, in addition to investments and interest in cloud platforms, super computing and AI, there is a global explosion in warehouse construction and reconfiguration for automation. According to Cushman & Wakefield, U.S. developers added almost 1 billion square feet of warehouse space from 2013 to 2017, a 2X increase over the previous 5 years. It’s harder to get information for China but news stories indicate similar if not greater growth, new forms of automation and labor shortages.

The constant lament heard in the U.S. (and presumed to be relevant worldwide) is captured in this quote from a fulfillment executive:

“A big part of our strategy is how do we make the current employees we have more productive and to reduce the requirement for more labor at peak times.”

Providing warehouse labor is a big business because workers are hard to find and turnover is more than 10% per month. Hence the simultaneous investment in robotics and smart warehousing systems to maximize human effort and reduce costly errors and turnover.

Warehousing has always been as automated as possible, particularly in pallet and box handling, but as labor has become more scarce and costly, as robotic systems have improved and costs been reduced, and as the number of shipments has increased exponentially due to e-commerce, the nature of material handling and fulfillment has radically changed. Hence the need for mechanical assistance.

But this is fodder for another article to follow shortly on the global inroads being made in fulfillment and material handling. Stay tuned.

Pipe-crawling robot will help decommission DOE nuclear facility

A pair of autonomous robots developed by Carnegie Mellon University's Robotics Institute will soon be driving through miles of pipes at the U.S. Department of Energy's former uranium enrichment plant in Piketon, Ohio, to identify uranium deposits on pipe walls.

Economist predicts job loss to machines, but sees long-term hope

Are we bumping up against the "Robocalypse," when automation sweeps industry and replaces human workers with machines? BU economist Pascual Restrepo says that interpretation is too gloomy, although his recent research, posted online by the National Bureau of Economic Research, reveals that the adoption of just one industrial robot eliminates nearly six jobs in a community.

#256: Socially Assistive Robots, with Maja Matarić



In this episode, Audrow Nash speaks with Maja Matarić, a professor at the University of Southern California and the Chief Scientific Officer of Embodied, about socially assistive robotics. Socially assistive robotics aims to endow robots with the ability to help people through individual non-contact assistance in convalescence, rehabilitation, training, and education. For example, a robot could help a child on the autism spectrum to connect to more neurotypical children and could help to motivate a stroke victim to follow their exercise routine for rehabilitation (see the videos below). In this interview, Matarić discusses the care gap in health care, how her work leverages research in psychology to make robots engaging, and opportunities in socially assistive robotics for entrepreneurship.

A short video about how personalized robots might act as a “social bridge” between a child on the autism spectrum and a more neurotypical child.

 

A short video about how a robot could assist stroke victims in their recovery.

 

Maja Matarić

Maja Matarić is a professor and the Chan Soon-Shiong Chair in the Computer Science Department, Neuroscience Program and Department of Pediatrics at the University of Southern California, founding director of the USC Robotics and Autonomous Systems Center (RASC), co-director of the USC Robotics Research Lab, and Vice Dean for Research in the USC Viterbi School of Engineering. She received her PhD in Computer Science and Artificial Intelligence from MIT, her MS in Computer Science from MIT, and her BS in Computer Science from the University of Kansas.

 

 

