Archive 31.05.2016

The Drone Center’s Weekly Roundup: 5/30/16

An MQ-9 Reaper flies at an air show demonstration at Cannon Air Base, NM. Cannon is home to the Air Force Special Operations Command’s drone operations. Credit: Tech. Sgt. Manuel J. Martinez / US Air Force

At the Center for the Study of the Drone

As more commercial drone users take to the sky, insurers are struggling to develop policies to cover the eventualities of flying. Meanwhile, insurance companies also want to fly drones themselves for appraisals and damage assessments. We spoke with Tom Karol, general counsel-federal for the National Association of Mutual Insurance Companies, to learn about the uncertain landscape that is the drone insurance industry.

News

Pakistan criticized the U.S. government for a drone strike that killed Mullah Akhtar Mansour, the leader of the Afghan Taliban. In a statement, Sartaj Aziz, foreign affairs special adviser to Pakistani Prime Minister Nawaz Sharif, said that the strike undermined attempts to negotiate a peace deal with the Taliban. “Pakistan believes that politically negotiated settlement remains the most viable option for bringing lasting peace to Afghanistan,” Aziz said. (Wall Street Journal)

Commentary, Analysis and Art

The editorial board at the New York Times argues that the political and strategic consequences of a drone strike are not always immediately apparent.

Also at the New York Times, Vanda Felbab-Brown contends that the drone strike that killed Mullah Mansour “may create more difficulties than it solves.”

At Lawfare, Robert Chesney considers what it would mean if the strike against Mullah Mansour had not been conducted under the Authorization for Use of Military Force.

At the National Interest, Elsa Kania and Kenneth W. Allen provide a detailed summary of China’s push to develop military drones.

At Slate, Stephen E. Henderson writes that law enforcement officers have as much right under the law to fly a drone as a private citizen.

Also at Slate, Faine Greenwood offers an etiquette guide to flying a drone.

Jarrod Hodgson, an ecologist at the University of Adelaide in Australia, is calling for scientists and hobbyists to follow a code of conduct when using drones for wildlife research. (ABC News)

In testimony before the House Committee on Homeland Security, Subcommittee on Border and Maritime Security, Rebecca Gambler, the director of homeland security and justice at the Government Accountability Office, reviewed the Customs and Border Protection’s drone program. (GAO)

At NBC News, Richard Engel reports from inside Creech Air Force Base, the Nevada home of U.S. drone operations.

At DroneLife.com, Malek Murison takes a look at a technological solution aimed at boosting the popularity of drone racing.

A report by the NPD Group found that drone sales increased by 224 percent between April 2015 and April 2016. (MarketWatch)

At Flightglobal, Beth Stevenson examines the German military’s efforts to acquire advanced unmanned aircraft.

At Aviation Week, Tony Osborne considers the challenges that beset the Anglo-French effort to develop an advanced combat drone building on the Taranis demonstrator.

The Economist surveys the different drone countermeasures currently in development.

In Cities From the Sky, German photographer Stephan Zirwes captures aerial views of pools, beaches and golf courses. (Curbed)

Meanwhile, photographer Gabriel Scanu uses a drone to capture the scale of Australia’s landscapes. (Wired)

Know Your Drone

Chinese smartphone maker Xiaomi unveiled two consumer multirotor drones. (Wired)

DRS Technologies partnered with Roboteam to develop an anti-IED unmanned ground vehicle for the U.S. Army. (Press Release)

Belgian startup EagleEye Systems has developed software that allows commercial drones to operate with a high degree of autonomy. (ZDNet)

Estonian defense firm Milrem announced that its THeMIS unmanned ground vehicle has passed a round of testing by the Estonian military. (Digital Journal)

Defense contractor Raytheon is working to offer its Phalanx autonomous ship defense system as a counter-drone weapon. (Flightglobal)

Meanwhile, Raytheon and Israeli firm UVision are modifying the Hero-30, a canister-launched loitering munition drone, for the U.S. Army. (UPI)

3D printing services company Shapeways announced the winners of a competition to design 3D-printed accessories for DJI consumer drones. (3DPrint.com)

Cambridge Pixel released a radar display that can control multiple unmanned maritime vehicles. (C4ISR & Networks)

The Office of Naval Research released footage of LOCUST, a drone swarming system, in action. (Popular Science)

Drones at Work

Tom Davis, an Ohio-based engineer, offers the elderly the opportunity to fly drones. (Ozy)

Egyptian authorities used an unmanned undersea vehicle to search for debris from the downed EgyptAir flight in the Mediterranean. (Reuters)

Commercial spaceflight company SpaceX completed another successful landing of its Falcon 9 reusable rocket on an unmanned barge. (The Verge)

A South Korean activist group uses unmanned aircraft to drop flash-cards into North Korean territory. (CNN)

Hobbyists used a series of drones to make an impressive Star Wars fan film. (CNET)

The city of Denver partnered with Autodesk, 3D Robotics, and Kimley-Horn to make a drone-generated 3D map of the city’s famous Red Rocks site. (TechRepublic)

The Town of Hempstead on Long Island, New York, is considering a ban on the use of drones over beaches, pools, golf courses, and parks. (CBS New York)

A man in Rutherford County, Tennessee told WKRN that his drone was shot at as he was flying near his home.

HoneyComb, a drone services startup, offers farmers the chance to view every inch of their farms from the air. (New York Times)

A drone resembling the Iranian Shahed-129 was spotted flying over Aleppo, Syria. (YouTube)

Images obtained by Fox News appear to show a Chinese Harbin BZK-005 drone on Woody Island, one of the Paracel Islands in the South China Sea.

Insurance giant Munich Re partnered with PrecisionHawk to use drones for assessing insurance claims. (Press Release)

The Australian Navy completed flight trials of the Boeing Insitu ScanEagle drone. (UPI)

Meanwhile, Australian energy company Queensland Gas, a subsidiary of Shell, will use a Boeing Insitu ScanEagle to conduct pipeline inspections. (Aviation Business)

The FAA granted the Menlo Park Fire Protection District permission to use drones during wildfires and other emergencies. (Palo Alto Online)

Industry Intel

Defense firm Thales sold its Gecko system, which uses radars and thermal cameras to detect drones, to an undisclosed country in Southeast Asia. (Press Release)

General Atomics Aeronautical Systems, Inc. announced collaborations with the University of North Dakota and CAE, Inc. to provide equipment for the new RPA Training Academy in Grand Forks, North Dakota. (Press Release)

Ultra Electronics secured an $18.4 million contract to provide engineering support to a NATO country for a surveillance drone. (IHS Jane’s 360)

For updates, news, and commentary, follow us on Twitter. The Weekly Drone Roundup is a newsletter from the Center for the Study of the Drone. It covers news, commentary, analysis and technology from the drone world. You can subscribe to the Roundup here.

 

 

Advanced Robotics Manufacturing Institute in planning for USA

Robot weaving carbon fiber into rocket parts. Source: NASA/YouTube

The US just moved a step closer to building an advanced robotics institute modeled on the hugely successful Fraunhofer Institutes. The proposed Advanced Robotics Manufacturing (ARM) Institute is one of seven candidates moving forward in an open bid for $70 million in NIST funding for an innovation institute to join the National Network for Manufacturing Innovation. Previously funded institutes cover advanced composites, flexible electronics, digital and additive manufacturing, semiconductor technology, textiles, and photonics.

The ARM Institute bid is being led by Georgia Tech and CMU, alongside several other universities, including MIT, RPI, USC, UPENN, TAMU and UC Berkeley. At a recent ‘industry day’ at UC Berkeley on May 25, an invitation was extended to “all companies, regional economic development organizations, colleges and universities, government representatives and non-­profit groups with interests in advanced and collaborative robotics industry to participate in this initiative to ensure the competitiveness of US robotics and thereby enhance the quality of life of our citizens.”

Industry participation is critical to the success of the ARM Institute. Industry is expected not only to provide matching funds for the NIST grant, but also to provide strategic guidance in setting the initiative's priorities, currently defined as:

  • Collaborative Robotics
  • Rapid Deployment of Flexible Robotic Manufacturing
  • Low-cost Mass Production in Quantities of One

The vision is to create a national resource of manufacturing research and solutions, linking regional hubs and manufacturing centers. An important component of this vision is providing a pathway for SMEs and startups to connect with established industry and research partners.

Source: NASA/YouTube

To participate in the ARM Institute bid, industry leaders, SMEs, and startups are expected to provide a non-binding letter of support before June 15. For more information, see the ARM Institute. If the bid is successful, the ARM Institute may start as early as the first quarter of 2017.

#ICRA2016 photo essay


A small photo essay from this year’s ICRA in Stockholm featuring photos from the exhibition hall. At the end, you can watch a video with all the companies, their robots and systems. (All photos by Ioannis Erripis @ Robohub.)

Consequential Robotics


MIRO is an advanced biomimetic companion robot that originated at the University of Sheffield.

http://consequentialrobotics.com/
http://www.sebastianconran.com/

 

Furhat Robotics


Furhat is a very original take on human-robot interaction. It sidesteps the stereotypical uncanny-valley problem with a mix of realism and abstraction that focuses on the parts of a face people value most in conversation. Furhat can act as an interface between a human and a service like Siri. The photos show its simple but clever construction: video is projected vertically, via a tilted mirror, onto the face, which acts as a screen.

http://www.furhatrobotics.com/

Cogniteam


Cogniteam, from Israel, presented the Hamster, a small but capable rover similar in size to an R/C car.

http://cogniteam.com/

 

HEBI Robotics


HEBI Robotics had on display an arrangement of several X-Series modular actuators.

http://hebirobotics.com/

 

Husqvarna


Husqvarna has launched a project that provides researchers with a version of its established autonomous lawnmowers. These modified versions have an open architecture, so various modules can be added and the platform's functions accessed directly. The robots are not for sale, but if your research could make use of the platform, you can contact Husqvarna about collaborating with your lab or institute.


http://www.husqvarna.com/

 

Milvus Robotics


Milvus Robotics, from Turkey, had on display their smart kiosk robot and their research platform.

http://milvusrobotics.com/

 

Moog


Moog produces a variety of products for industrial and aerospace use. The photos show examples of metal 3D printing, an area in which the company is heavily invested. 3D printing can produce shapes that would otherwise be impossible to manufacture. Currently, apart from fatigue resistance, their printed products are mechanically on par with die-cast items. Moog is working to improve the final part properties and create products comparable to, or better than, extruded metal items.

http://www.moog.com/

 

Sake Robotics


Sake Robotics gave a live demo of their gripper, which is robust, strong, and relatively cheap. A clever system of pulleys lets every component stay loose on standby, yet tighten up to hold heavy objects under load.

http://sakerobotics.com/

 

ROBOTIS – Seed Robotics – Righthand Robotics


ROBOTIS, in addition to its wide variety of products, hosted two other companies: Seed Robotics and RightHand Robotics. Each had its own specialized grippers and robotic hands on display.

http://robotis.com/
http://www.seedrobotics.com/
http://www.righthandrobotics.com/

 

Phoenix Technologies


Phoenix Technologies demonstrated their fast and modular 3D tracker.

http://www.ptiphoenix.com/

 

PAL Robotics


The REEM-C humanoid attracted most of the interest at the PAL Robotics stand. It not only stood but also walked throughout the exhibition hall (loosely attached to a frame for safety).

http://pal-robotics.com/

 

Here you can watch a video with various clips from the exhibition area:

How is Pepper, SoftBank’s emotional robot, doing?

Source: Getty Images

Pepper is a child-height human-shaped robot described as having been designed to be a genuine companion that perceives and acts upon a range of human emotions.

SoftBank, the Japanese telecom giant, acquired Aldebaran Robotics and commissioned the development of Pepper. SoftBank subsequently formed a joint venture with Alibaba and Foxconn to develop, produce, and market the robots. There has been much fanfare about Pepper, particularly its ability to use body movement and tone of voice to communicate in a way designed to feel natural and intuitive.

The number of Peppers sold to date is newsworthy. As of today, there are likely close to 10,000 Peppers out in the world. Online sales have been 1,000 units each month for the last seven months with additional sales to businesses such as Nestle for their coffee shops and SoftBank for their telecom stores.

Source: Nestle Japan/YouTube

At around $1,600 per robot, 10,000 robots equates to $16 million in sales, but Peppers are sold on a subscription contract that includes a network data plan and equipment insurance. This costs $360 per month and, over 36 months, brings the total cost of ownership to over $14,000 per robot. Consequently, many are asking what Peppers are being used for, how they are perceived, and whether they are useful. Essentially, how is Pepper doing? Does it offer value for the money?
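The cost figures above can be checked with a quick back-of-the-envelope calculation, using only the prices and contract length quoted in this article:

```python
# Rough total-cost-of-ownership arithmetic for Pepper, using the figures
# quoted above (all values approximate, in USD).
UNIT_PRICE = 1600       # upfront price per robot
MONTHLY_FEE = 360       # subscription: network data plan + insurance
CONTRACT_MONTHS = 36    # standard contract length

def total_cost_of_ownership(units=1):
    """Upfront price plus subscription fees over the full contract."""
    return units * (UNIT_PRICE + MONTHLY_FEE * CONTRACT_MONTHS)

print(total_cost_of_ownership())   # 14560 -> the "over $14,000" per robot
print(10_000 * UNIT_PRICE)         # 16000000 -> $16M in hardware sales
```

The gap between $16 million in hardware sales and roughly $145 million in lifetime contract value is why the subscription business matters more to SoftBank than the robots themselves.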

Two recent videos provide a window into Pepper’s state of development.

FINANCIAL TIMES

In a promotional effort, Pepper and a SoftBank publicity team came to the London offices of the Financial Times for an introduction and visit. This video shows one reporter’s attempt to understand Pepper’s capabilities and interactive abilities.

People in the FT offices were definitely attracted, amused, and happy with the initial experience of being introduced to Pepper. They laughed at Pepper's failures and patted its head to make it feel better. But Pepper failed in every way to (1) be a companion, (2) recognize emotional cues, (3) converse reliably and intelligently, and (4) provide any level of service beyond first-time entertainment.

MASTERCARD

MasterCard unveiled the first application of their MasterPass digital payment service by a robot. It will be rolled out in Pizza Hut restaurants in Asia on Pepper robot order-takers beginning in Q4 2016. To accentuate the hook-up, MasterCard created this video showing what they hope will be a typical interaction involved in Pepper taking a customer’s order.

Tobias Puehse, vice president, innovation management, Digital Payments & Labs at MasterCard, said of the venture with SoftBank and Pepper bots:

“The app’s goal is to provide consumers with more memorable and personalized shopping experience beyond today’s self-serve machines and kiosks, by combining Pepper’s intelligence with a secure digital payment experience via MasterPass.”

One might ask what happens in a noisy, imperfect, acoustic environment? What does conversing with Pepper really add to a conveniently placed kiosk or tablet? How are Pepper’s emotional capabilities being used in this simple order-taking interaction? What happens if a customer strays from the dialogue the robot expects?

BOTTOM LINE

There’s no doubt that Pepper is an impressive engineering feat and an advertising draw. However, the emotion-recognition aspects of Pepper didn’t appear to matter in either video, even though they are supposed to be Pepper’s strength. The entertainment value seemed to be what attracted the crowds, and this temporary phenomenon isn’t likely to persist. In fact, this was shown to be true in China, where restaurants began using rudimentary robots as mobile servers and busbots. In the last few months, however, there have been reports of those robots being retired because their entertainment value wore off and their inflexibility as real servers became evident.

The marketing around Pepper may have created expectations that can’t be met with this iteration of the robot. A comparison can be made here to Jibo and the problems it is having meeting deadlines and expectations. Jibo has extended the delivery date – once again – to October 2016 for crowdfunded orders, and early next year for the others.

The connection of Pepper to a telecom provider, and the sales it brings in the form of two- and three-year data service contracts, can be big business for that provider: SoftBank is the exclusive provider of those data services in Japan. The value of that business can be seen in the surge in share price of Taiwanese telecom company Asia Pacific Telecom on news that it will begin selling Pepper robots in Taiwan.

Brain-Computer Interface (BCI) livestream today


For the very first time, the 6th International Brain-Computer Interface (BCI) Meeting Series will offer free remote attendance via livestream.

The livestream will be available today, Monday 30 May, during the evening opening session at 19:30 PST, and tomorrow, Tuesday 31 May, during the morning session at 9:30 PST and the afternoon sessions at 13:30 and 14:30 PST.

The BCI Meeting will open with the Once and Future BCI Session, featuring speakers: Eberhard Fetz, Emanuel Donchin, and Jonathan Wolpaw.

Tuesday morning will feature the State of BCI Symposium, with speakers Nick Ramsey, Lee Miller, Donatella Mattia, Aaron Batista, and José del R. Millán. Tuesday afternoon will be the Virtual Forum of BCI Users and selected oral presentations. The remainder of the BCI Meeting consists of poster sessions and workshops, which cannot be attended remotely.

You can read all papers submitted to the BCI meeting here.

Please pass the livestreaming link (http://bcisociety.org/livestream/) to anyone else who may be interested in remote participation at the 2016 BCI Meeting.


Robots Podcast #209: INNOROBO 2015 Showcase, with RB 3D, BALYO, Kawada Robotics, Partnering Robotics, and IRT Jules Verne


In this episode, Audrow Nash interviews several companies at last year’s INNOROBO, a conference that showcases innovation in robotics.

Interviews include the following:

Oliver Baudet, business manager at RB 3D, speaks about the exoskeletons displayed at the showcase.

Baptiste Mauget, head of marketing and communication at BALYO, speaks about BALYO’s robots for warehouse automation.

Atsushi Hayashi, an engineer at Kawada Robotics, demonstrates a humanoid used in factories in Japan.

Abdelfettah Ighouess, Sales Director at Partnering Robotics, describes their robot for indoor air quality control.

Etienne Picard, a research and development engineer at IRT Jules Verne, speaks about a large cable-driven robot for manufacturing.

 


 

‘Robot kindergarten’ trains droids of the future

Photo source: ubcpublicaffairs/YouTube

Less than 100 years from now, robots will be friendly, useful participants in our homes and workplaces, predicts UBC mechanical engineering professor and robotics expert Elizabeth Croft. We will be living in a world of Wall-Es and Rosies, walking-and-talking avatars, smart driverless cars and automated medical assistants.

But much work remains before robots will truly be integrated into our daily lives. In this short Q&A, Croft lays out the rules for engagement between humans and robots and explains why it’s crucial to get this aspect right.

What role will robots play in our lives in the future?

Elizabeth Croft, UBC mechanical engineering professor and robotics expert. Photo: UBC

They will be everywhere, helping us at home and at work. They could make you breakfast in the morning and check on your kids. They could be your frontline staff, giving visitors directions and answering questions. They could be your physician’s assistant. Or you could have a robotic avatar that will attend a meeting for you while you’re traveling on the other side of the world.

Future robots may be self-replicating, self-growing, and self-organizing. The natural evolution of robotics is toward incorporating biology. We can now grow cells around bio-compatible structures; this opens the door to combining biological systems with embedded artificial intelligence, and eventually to the cyber-physical workforce of the future.

What technologies are driving the growth of robotics?

Computing power continues to grow exponentially, and ubiquitous communication is being made possible by wireless technology. Dense energy storage and new energy harvesting and conversion technologies allow machines to operate in the world, unplugged. And finally, machine learning: networked computer systems have global access to huge amounts of data that, combined with robotic embodiment, allow robots to learn about the world in ways that mimic and move beyond how people learn about their environment.

Your work at UBC focuses on human-robot interaction. Why is this important? 

As robots become more and more a part of our lives, the question becomes: what are the rules of the game? What is OK for robots to do, and what is not? Robots will have abilities that we don’t have, and we need to define what they are allowed to do with that capacity.

There are some big ethical questions to consider: how does society deal with drones that can kill, for example. But there are important day-to-day questions too. If a human and a robot are accessing the same resource – the same roadway, same tools, same power source, who yields? Does the person always get their way? What if the robot is doing something for the greater good, for example, a robotic ambulance?

Researcher with robot. Photo source: ubcpublicaffairs/YouTube

In a way, I like to think of our lab as robot kindergarten. We are teaching robots basic, building-block behaviours and ground rules for how they interact with people: how to hand over a bottle of water, how to look for things, how to take turns. Having these basic behaviours in place allows us to create human-robot interactions that are natural and fluid.

To achieve our goals, our lab welcomes researchers from different disciplines—ethics, law, machine learning, experts in human computer interaction—as well as different international cultures. Different cultures have different ideas of robots. We learn a lot from these many perspectives.

Elizabeth Croft will speak about the future of robotics at a UBC Centennial public talk on May 28. Click here for more information.

Robots can help reduce 35% of work days lost to injury

Robot in assembly at Hall 52, June 26, 2013. Source: BMW Group

What’s the biggest benefit of using collaborative robots? It’s not better efficiency. It’s not the extra hours a robot can work in a shift. It’s not even improved consistency across your product. While these are all great bonuses, the biggest benefit of robots is their impact on reducing workplace injury.

Workplace injury is an issue that affects millions of workers worldwide each year and costs businesses billions in revenue. Although it’s not possible to avoid every injury, many workplace injuries are preventable. Musculoskeletal disorders in particular are often preventable, as they are usually caused by bad workplace ergonomics. Collaborative robots are a great way to solve this problem.

In a previous article, we looked at how collaborative robots are designed to be ergonomic products in themselves. In this article, we’ll look at how collaborative robots can be used to reduce ergonomic problems across your workplace. We’ll also show how you can pick the best tasks for your collaborative robot by putting on your “ergonomics glasses.”

Musculoskeletal Disorders: Too important to ignore

Musculoskeletal disorders (MSDs) refer to a set of injuries and disorders that affect the human body’s movement or musculoskeletal system: the muscles, joints, tendons, nerves, and ligaments. According to FitForWork Europe, MSDs are a huge problem in the modern world: 21.3% of disabilities worldwide are due to MSDs, and they are estimated to cost the European Union around 240 billion euros each year in lost productivity and sickness absence. MSDs accounted for 35% of all work days lost in Austria in 2007.

Because they are caused by repetitive physical stresses on the human body, some industries are more affected than others. Manufacturing and food processing are classed as high-risk, as outlined in this report on the impact of MSDs in the USA. Some jobs are prone to specific injuries due to the type of work involved; industrial inspection and packaging jobs, for example, are prone to upper-extremity MSDs. In 2012, the manufacturing industry had the fourth-highest number of MSDs, with 37.4 incidents per 10,000 workers.

All this injury costs your business money; a lot of money. Ergo-Plus estimates that you would have to generate $8 million worth of extra sales to cover the costs associated with the most common MSDs. That’s crazy, because these injuries can be prevented by simply applying basic ergonomic principles.
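The $8 million figure is an instance of a standard indirect-cost calculation: because injury costs come straight out of profit, the extra revenue needed to offset them equals the cost divided by the profit margin. A minimal sketch, where the cost and margin inputs are illustrative assumptions rather than Ergo-Plus’s actual figures:

```python
# Standard "required sales" calculation used in ergonomics cost analyses:
# an injury cost paid out of profit must be recovered by cost / margin in
# extra revenue. The inputs below are illustrative, not Ergo-Plus's data.
def required_sales(injury_cost, profit_margin):
    """Extra revenue needed to offset an injury cost at a given margin."""
    return injury_cost / profit_margin

# e.g. $240,000 in direct and indirect MSD costs at a 3% profit margin:
print(required_sales(240_000, 0.03))  # 8000000.0 -> $8M in extra sales
```

Seen this way, even modest injury costs translate into daunting sales targets at typical manufacturing margins, which is the argument for spending on prevention instead.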

How robots can reduce workplace injury

In 2013, we reported on a case study at Volkswagen, which had deployed the UR5 collaborative robot in one of its facilities. In it, Jürgen Häfner explained the reasons for introducing the robot:

“We would like to prevent long-term burdens on our employees in all areas of our company with an ergonomic workplace layout. By using robots without guards, they can work hand-in-hand together with the robots. In this way, the robot becomes a production assistant in manufacture and as such can release staff from ergonomically unfavorable work.”

Source: Robotiq

You can improve a task’s ergonomics in two ways:

  1. Use ergonomic principles to redesign the task, to reduce the physical stress on the worker.
  2. Find a way to complete the stressful part of the task differently, without using a human worker at all. This can hugely reduce the chance of MSDs.

The use of collaborative robots falls squarely into the second category. However, you still need a working knowledge of ergonomic principles to recognize whether a task can cause MSDs.

How to tell if a task can cause MSDs

Ergonomics professionals sometimes talk about wearing “ergonomics glasses,” meaning they view the workplace from an ergonomics perspective. Before learning about ergonomics, you could easily fail to notice that a task could injure a worker. Once you learn the principles, ergonomics issues will jump out at you as you walk through your workplace. There are two different types of ergonomics:

  1. Proactive Ergonomics – This involves solving ergonomics issues before they arise, either by walking around your workplace whilst “wearing your ergonomics glasses” or by incorporating ergonomic principles into the initial design of processes.
  2. Reactive Ergonomics – This is what usually happens. It’s solving a problem when it is already a problem. A worker suffers an injury as a result of the task and so we retrospectively try to improve the ergonomics of the task.

At the very least, we should adopt a more proactive approach to ergonomics. In an ideal world, all ergonomics issues would be solved proactively, before they arise. However, being realistic, we’re likely to end up with a combination of the two approaches.

Three steps to improve ergonomics with collaborative robots

Applying ergonomics principles is really quite simple. It just involves a slight change of mindset and three easy steps.

Step 1: Learn what to look out for

The first step to proactive ergonomics is learning how to spot bad ergonomics. There are some great resources online, particularly the free resources and blogs from Ergo-Plus, the International Labor Organization, Dan MacLeod, and FitForWork Europe.

There are a few fundamental principles of ergonomics, which can vary slightly in wording depending on which resource you consult. Tasks that violate one or more of these will need to be redesigned or handed off to a robot, to avoid injuring the worker over time:

  1. Workers must maintain a neutral posture without putting their body in awkward positions.
  2. Reduce excessive forces and vibrations on the human body.
  3. Keep everything in easy reach and at the proper height, to allow the worker to operate in the natural power/comfort zones of the human body.
  4. Reduce excessive motions in the task. This is especially important in repeated motions.
  5. Minimize fatigue, static load and pressure points on the human body.
  6. Provide adequate clearance and lighting in the workplace.
  7. Allow the worker to move, exercise and stretch. After all, the human body is “designed to move,” not to stay in the same position for long periods of time.

Neutral and awkward back postures (source)

Become familiar with these principles by reading over the linked resources and looking at example images of good and bad ergonomic practices.
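For a first pass, the principles above can even be encoded as a simple screening checklist. A minimal sketch in Python, where the principle wording and the yes/no scoring are illustrative simplifications rather than a validated assessment tool:

```python
# Minimal task-screening checklist based on the seven principles above.
# The wording and pass/fail scoring are illustrative, not a validated
# ergonomic assessment method.
PRINCIPLES = [
    "neutral posture maintained",
    "excessive force and vibration avoided",
    "work within easy reach and at proper height",
    "excessive or repeated motions avoided",
    "fatigue, static load and pressure points minimized",
    "adequate clearance and lighting provided",
    "worker free to move, exercise and stretch",
]

def screen_task(answers):
    """Return the principles a task violates.

    `answers` maps a principle to True (satisfied) or False; principles
    not answered are treated as violations, to be conservative.
    """
    return [p for p in PRINCIPLES if not answers.get(p, False)]

# A task violating any principle is a candidate for redesign, or for
# handing the stressful step over to a collaborative robot.
violations = screen_task({p: True for p in PRINCIPLES[:5]})
print(violations)  # the two principles the task fails to satisfy
```

Walking the workplace with a checklist like this is one way to put on your “ergonomics glasses” systematically rather than relying on intuition alone.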

Step 2: Stand up and walk!

The second step is to stand up off your chair and walk around your workplace, noticing everything you can about the ergonomics of the tasks. Ask yourself (and then ask your colleagues) the following two questions:

  1. How could this task be improved to make it more comfortable (less physically stressful) to the worker?
  2. Which parts of the task could we give to a collaborative robot to solve the ergonomic issues?

We recommend taking photos and videos of the tasks to document the ergonomics improvement process. This is doubly useful: it records the problems you find, and you can later use the same photos to help design your collaborative robot process.

Step 3: Apply collaborative robots to improve ergonomics

Once you have identified problem areas in the workplace, you are likely to have a list of tasks which need improvement. Some of the tasks will be possible to carry out with a collaborative robot. Others will not, so you will need to improve their ergonomics in other ways.

Best practices for looking at tasks ergonomically

With so much information available about ergonomics, you might be thinking that you’d have to invest big in training to become a proactive ergonomics business. However, this is not necessarily the case. You can start small and still see big benefits. The International Labor Organization gives three simple things to remember when applying ergonomics to your workplace:

  1. It is most effective to examine tasks on a case-by-case basis. Ergonomics issues can be very specific to a task, so don’t assume that applying exactly the same fix everywhere in the workplace will be effective. Start small, looking at just one or two tasks, and build up gradually.
  2. Even minor changes to ergonomics can drastically reduce injury. It might seem trivial to have a robot pick up a part and move it 50cm closer to a human operator, but over an 8-hour shift this simple movement can add up to a huge physical strain on the worker. Simply knowing that the optimal working radius of a table is 25cm allows you to understand this and put it into practice.
  3. Staff should be involved in making any ergonomics changes to the workplace. People themselves are the best source of insight into how to improve their work tasks. Get your workers involved right from the start to identify problem areas and apply collaborative robots.
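The 25cm optimal working radius mentioned above lends itself to a quick screening script. The sketch below is purely illustrative: the task names, coordinates, and the flat 2-D simplification are invented assumptions, not part of any ergonomics standard. It flags task points beyond the optimal radius as candidates for a cobot handover.

```python
# Minimal sketch: flag task points that fall outside a worker's optimal
# working radius (the 25cm figure cited above) as candidates for a cobot
# handover. All names, coordinates and thresholds here are illustrative.
import math

OPTIMAL_RADIUS_CM = 25.0  # optimal working radius cited in the text

def outside_optimal_reach(point_cm, shoulder_cm=(0.0, 0.0)):
    """Return True if a pick/place point lies beyond the optimal radius."""
    dx = point_cm[0] - shoulder_cm[0]
    dy = point_cm[1] - shoulder_cm[1]
    return math.hypot(dx, dy) > OPTIMAL_RADIUS_CM

# Hypothetical example: a part staged 50cm away strains the worker, while
# a fixture within 25cm does not.
task_points = {"staged part": (50.0, 0.0), "fixture": (15.0, 10.0)}
cobot_candidates = [name for name, p in task_points.items()
                    if outside_optimal_reach(p)]
```

A survey like this could feed directly into Step 2: each flagged point becomes a task to photograph and discuss with the workers involved.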

Why businesses must get ready for the era of robotic things


We’ve entered a period of epic technological transformation that is impacting society in ways that leave even veteran tech observers speechless. In some ways, it might seem like 1998 all over again. The internet was then in its infancy and cyberspace was uncharted territory for much of the population. The Dot-com boom and eventual bust were inevitable, reflecting markets that expanded and contracted with the newfound surge of interest in, and obviously overinflated speculation around, the potential of the Internet to transform society. Following a tremendous growth spurt in its first 5-6 years, the World Wide Web ended its first decade by learning to become sociable.

Indeed, the rise of digital social networks through channels like Facebook, Twitter, Instagram, and Snapchat has transformed how businesses and individuals work, think, and communicate. And now that the Internet has crossed the threshold into young adulthood, it continues to grow and drive massive transformations throughout all parts of business and society. Soon-to-be-released autonomous cars, wearable tech, drones, 3-D printing, smart machines, home automation, virtual assistants like Siri, you name it . . . the pace of change is staggering.

Graphic by Predikto on company blog.

Many of the breakthroughs and innovations we’re seeing right now are the result of the confluence of three major technologies over the past 8 years: mobile, cloud, and Big Data. These technologies have collectively enabled quicker and more efficient collaboration, development, and production, which in turn have allowed businesses to achieve unprecedented levels of growth and expansion. New ways of working, often remotely, mean that more people can do their jobs outside of traditional corporate structures. We’ve entered not only the freelancer economy, but the startup one as well. Now everyone is an entrepreneur, and innovation and new business growth are through the roof. What’s more, technology processes and production cycles have become commoditized. This means that anyone with the right idea, system, skills, and network in place today can effectively build a billion-dollar business with very low overhead costs.

This crazy rate of change is certainly great from the consumer standpoint, but it is also a bit unnerving for businesses simply trying to keep their heads above water. How are companies today to keep up with the market, their competitors, and consumer expectations? What does all this change mean for startups struggling to gain traction, for more traditional brick-and-mortar businesses, and even for established enterprises that might be too big to pivot quickly?

These are more comprehensive questions that we’ll try to answer in another blog post. But for now, here is what we do know about the massive impacts over the next few years. The first is that the Internet of Things is taking the world by storm, with projections that 21 billion objects will be connected by the year 2020. That’s just about 3 for every man, woman, and child on the planet! A few years ago Cisco estimated that the IoT market would create $19 trillion of economic value in the next decade.

What’s more, the global robotics industry is also undergoing a major transformation. Market intelligence firm Tractica released a report in November 2015 forecasting that global robotics will grow from $28.3 billion worldwide in 2015 to $151.7 billion by 2020. What’s especially significant is that this market will encompass mostly non-industrial robots, including segments like consumer, enterprise, medical, military, UAVs, and autonomous vehicles. Tractica anticipates an increase in annual robot shipments from 8.8 million in 2015 to 61.4 million by 2020; in fact, by 2020 over half of this volume will come from consumer robots.
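Tractica’s figures imply very steep compound annual growth. As a quick back-of-envelope sanity check (this calculation is ours, not part of the report):

```python
# Back-of-envelope check of Tractica's forecast: the implied compound
# annual growth rates for revenue ($28.3B -> $151.7B) and shipments
# (8.8M -> 61.4M) over the five years from 2015 to 2020.
def cagr(start, end, years):
    """Compound annual growth rate, returned as a fraction."""
    return (end / start) ** (1.0 / years) - 1.0

revenue_cagr = cagr(28.3, 151.7, 5)   # roughly 0.40, i.e. ~40% per year
shipment_cagr = cagr(8.8, 61.4, 5)    # roughly 0.47, i.e. ~47% per year
```

In other words, the forecast assumes the market grows by close to half its size every year, with unit shipments growing even faster than revenue.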


Putting these two major industry trends together, it doesn’t take a rocket scientist to figure out that the Internet of Things and robotics will jointly lead to a “perfect storm” of global market disruption, opportunities, and growth in the next 4 years and beyond. This confluence is part of a larger epic transformation, which has appropriately been called the Second Machine Age. Here is how one FastCompany article sums it up:

The fact is we’re now on the cusp of a “Second Machine Age,” one powered not by clanging factory equipment but by automation, artificial intelligence, and robotics. Self-driving cars are expected to be widespread in the coming decade. Already, automated checkout technology has replaced cashiers, and computerized check-in is the norm at airports. Just like the Industrial Revolution more than 200 years ago, the AI and robotics revolution is poised to touch virtually every aspect of our lives—from health and personal relations to government and, of course, the workplace.

This is a mouthful, but in case it’s not clear, let me spell it out: there has never been a better time than now to get on board with robotics and the Internet of Things!

If you’re a startup or small business owner, and especially if you’re feeling behind the technology curve, you’re certainly not alone. But instead of commiserating about all of the changes, start today to proactively ask yourself what it will take to get your organization to the next level of innovation. Set yourself up with a 6-month, 12-month, 18-month and 2-year innovation plan that maps to a broader 2020 strategy. Time is of the essence, but it’s not too late to pivot and get on board with the robotics and IoT revolution. As the famous saying goes, “The journey of a thousand miles starts with one step.”

Multi-robotic fabrication method has potential to build complex, stable, three-dimensional constructions

Figure 1: Multi-robotic assembly of spatial discrete element structures. Source: NCCR Digital Fabrication.

Multi-robotic fabrication methods can strongly increase the potential of robotic fabrication for architectural applications through the definition of cooperative assembly tasks. As such, the objective of this research is to investigate and develop methods and techniques for the multi-robotic assembly of discrete elements into geometrically complex space frame structures.

This endeavour implies the definition of an integrative digital design method that leads to fabrication- and structure-informed assemblies that can be automatically built up into custom configurations. The research is being conducted at Gramazio Kohler Research as part of the interdisciplinary research program of the Swiss National Centre of Competence in Research (NCCR) Digital Fabrication. It was started in September 2014 by Stefana Parascho and currently includes collaborations with Augusto Gandía and Thomas Kohlhammer.

Spatial Structures

Space frame structures were developed during the industrial revolution as efficient systems for large-span constructions, but their variability was quickly limited by the need for standardisation and by complex connection detailing. Through the development of a multi-robotic assembly method as well as an integrated joining system, irregular space frame geometries become buildable, enhancing existing typologies through their potential for variability and efficient material use. The use of robotic fabrication techniques and the avoidance of pre-fabricated, rigid connections lead to a system that relies not only on digital planning and manufacturing but includes digital assembly as an addition to the digital chain. A process for the robotic build-up of triangulated structures was developed, based on the alternating placement of rods. This way, one robot always serves as a support for the already built structure while the other assembles a new element (Figure 2). As a result, the built structures do not require any additional support structures and are constantly stabilised by the robots.

Figure 2: Conceptual diagram of the multi-robotic assembly strategy, exemplified through the sequential build-up of a spatial triangulated structure. The two robots alternate in order to position the elements while at the same time serving as support structure. Source: NCCR Digital Fabrication.
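The alternating place-and-support scheme described above can be sketched as a simple scheduling loop. This is purely a conceptual illustration of the idea, with invented robot and rod names; it is not code from the NCCR project:

```python
# Conceptual sketch of the alternating build-up: while one robot holds
# (supports) the last placed rod, the other places the next one, so the
# structure is stabilised at every step without external supports.
# Purely an illustration of the scheduling idea, not the NCCR system.
def assembly_schedule(rods):
    """Yield (placer, supporter, rod) triples for an alternating build-up."""
    robots = ("robot_A", "robot_B")
    for i, rod in enumerate(rods):
        placer = robots[i % 2]            # robots alternate placing...
        supporter = robots[(i + 1) % 2]   # ...while the other stabilises
        yield placer, supporter, rod

schedule = list(assembly_schedule(["rod1", "rod2", "rod3"]))
# robot_A places rod1 while robot_B supports, then the roles swap, etc.
```

The real system layers collision avoidance and kinematic constraints on top of this alternation, which is where the path-planning challenges discussed below come in.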

Integrative Design Methods

Traditional architectural design methods commonly follow a top-down strategy in which both construction and fabrication are subordinated to a previously predefined geometry. In an integrative design approach, the fabrication, structural performance and given boundary constraints can simultaneously function as design drivers, allowing for a much higher flexibility and performance of the system. As such, the presented research focuses on the development of a design strategy in which various factors, such as constraints and characteristics of the multi-robotic fabrication process, are included in the geometric definition process of the structures.

Multi-robotic fabrication

The use of multiple robots for the assembly of discrete element structures opens up potential for the build-up of complex, stable, three-dimensional constructions. At the same time, the process introduces various challenges, such as the need for collision avoidance strategies between multiple robots and for the corresponding robotic path planning. In order to generate buildable structures, the design process needs to integrate the robots’ constraints, such as reach and kinematic behaviour, and at the same time process data from robotic simulation in order to foresee the robots’ precise movements. Through the collaboration with Augusto Gandía from Gramazio Kohler Research, a strategy was developed for implementing robotic simulation in a CAD environment and for generating collision-free trajectories for multi-robotic applications.

Figure 3: Mid-air build-up of tetrahedral structure without the use of any additional support structure. Source: NCCR Digital Fabrication.

If you liked this article, you may also be interested in:

See all the latest robotics news on Robohub, or sign up for our weekly newsletter.

In automation we trust: senior thesis project examines concept of over-trusting robotic systems

Serena Booth and her robot, Gaia, in its cookie-delivery disguise. (Photo by Adam Zewe/SEAS Communications.)

By Adam Zewe

If Hollywood is to be believed, there are two kinds of robots: the friendly and helpful BB-8s, and the sinister and deadly T-1000s. Few would suggest that “Star Wars: The Force Awakens” or “Terminator 2: Judgment Day” are scientifically accurate, but the two popular films raise the question, “Do humans place too much trust in robots?”

The answer to that question is as complex and multifaceted as robots themselves, according to the work of Harvard senior Serena Booth, a computer science concentrator at the John A. Paulson School of Engineering and Applied Sciences. For her senior thesis project, she examined the concept of over-trusting robotic systems by conducting a human-robot interaction study on the Harvard campus. Booth, who was advised by Radhika Nagpal, Fred Kavli Professor of Computer Science, received the Hoopes Prize, a prestigious annual award presented to Harvard College undergraduates for outstanding scholarly research.

During her month-long study, Booth placed a wheeled robot outside several Harvard residence houses. While she controlled the machine remotely and watched its interactions unfold through a camera, the robot approached individuals and groups of students and asked to be let into the keycard-access dorm buildings.

Booth's robot, Gaia, waits outside the entrance to Quincy House. (Image courtesy of Serena Booth.)

When the robot approached lone individuals, they helped it enter the building in 19 percent of trials. When Booth placed the robot inside the building, and it approached individuals asking to be let outside, they complied with its request 40 percent of the time. Her results indicate that people may feel safety in numbers when interacting with robots, since the machine gained access to the building in 71 percent of cases when it approached groups.

“People were a little bit more likely to let the robot outside than inside, but it wasn’t statistically significant,” Booth said. “That was interesting, because I thought people would perceive the robot as a security threat.”
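Booth’s remark about statistical significance can be illustrated with a standard two-proportion z-test. Note that the article does not report per-condition trial counts, so the sample sizes below are invented purely for illustration; only the 19% and 40% rates come from the study:

```python
# Illustrative two-proportion z-test for the inside (19%) vs outside (40%)
# entry rates. The sample sizes n1 and n2 are ASSUMED for this sketch; the
# article reports only 72 trials overall, not the per-condition counts.
import math

def two_proportion_z(successes1, n1, successes2, n2):
    """z-statistic for the difference between two proportions (pooled SE)."""
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Assumed counts: 3 of 16 trials let the robot in (~19%), 6 of 15 let it
# out (40%).
z = two_proportion_z(3, 16, 6, 15)
# |z| below 1.96 means the difference is not significant at the 5% level.
```

With sample sizes this small, even a doubling of the helping rate falls short of the 1.96 threshold, which is consistent with Booth’s finding.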

In fact, only one of the 108 study participants stopped to ask the robot if it had card access to the building.

But the human-robot interactions took on a decidedly friendlier character when Booth disguised the robot as a cookie-delivering agent of a fictional startup, “RobotGrub.” When approached by the cookie-delivery robot, individuals let it into the building 76 percent of the time.

“Everyone loved the robot when it was delivering cookies,” she said.

The cookie delivery robot successfully gained entrance into the residence hall. (Image courtesy of Serena Booth.)

Whether they were enamored with the knee-high robot or terrified of it, people displayed a wide range of reactions during Booth’s 72 experimental trials. One individual, startled when the robot spoke, ran away and called security, while another gave the robot a wide berth, ignored its request, and entered the building through a different door.

Booth had thought individuals who perceived the robot to be dangerous wouldn’t let it inside, but after conducting follow-up interviews, she found that those who felt threatened by the robot were just as likely to help it enter the building.

“Another interesting result was that a lot of people stopped to take pictures of the robot,” she said. “In fact, in the follow-up interviews, one of the participants admitted that the reason she let it inside the building was for the Snapchat video.”

While Booth’s robot was harmless, she is troubled that only one person stopped to consider whether the machine was authorized to enter the dormitory. If the robot had been dangerous—a robotic bomb, for example—the effects of helping it enter the building could have been disastrous, she said.

A self-described robot enthusiast, Booth is excited about the many different ways robots could potentially benefit society, but she cautions that people must be careful not to put blind faith in the motivations and abilities of the machines.

“I’m worried that the results of this study indicate that we trust robots too much,” she said. “We are putting ourselves in a position where, as we allow robots to become more integrated into society, we could be setting ourselves up for some really bad outcomes.”

Airborne tech is coming to the rescue

L’Aquila, Abruzzo, Italy. A government office disrupted by the 2009 earthquake. Source: Wikipedia Commons

by Fintan Burke

‘There’s no real way to determine how safe it is to approach a building, and what is the safest route to do that,’ said Dr Angelos Amditis of the Institute of Communication and Computer Systems, Greece. ‘Now for the first time you can do that in a much more structured way.’

Dr Amditis coordinates the EU-funded RECONASS project, which is using drones and satellite technology to help emergency workers in post-disaster scenarios.

It’s part of a new arsenal of airborne disaster response technologies which is giving a bird’s eye view of disaster zones to help save lives.

The system developed by RECONASS places wireless positioning tags in a building’s structure, along with strain, temperature and acceleration sensors, either when it is first built or through retrofitting. Then, after a disaster strikes, drones are deployed to scan the building’s exterior and match data from the sensors to this image to build an accurate representation of the damage.

This allows the system to precisely calculate potential areas of structural weakness in the damaged building, and it can then be paired with satellite data to get an overview of the damage in an area.

‘From one side we have 3D virtual images showing the status of the building … produced by ground sensors’ input, and from the other side we have the 3D real images from the drones and satellite views,’ said Dr Amditis.

‘There’s no real way to determine how safe it is to approach a building, and what is the safest route to do that.’

Dr Angelos Amditis, Institute of Communication and Computer Systems, Greece

Dr Amditis and colleagues are hoping to test the system in September by detonating explosives in a mock three-storey building in Sweden to investigate how well the drone technology works.

The end goal is for emergency services to use the drone technology to provide information about the structural state of tagged buildings within the first crucial hours after the disaster.

To boost rescue services’ ability to respond quickly in the event of a crisis, researchers in Spain are working on how to help emergency helicopter pilots successfully navigate through a disaster area by giving them more precise and reliable information about their position.

The EU-funded 5 LIVES project is building a system that uses both the newly launched Galileo Global Navigation Satellite System (GNSS) from the European Commission and the European Geostationary Navigation Overlay Service (EGNOS), which reports on the reliability and accuracy of data from satellites.

At the moment, most European helicopters rely on visual navigation, meaning they cannot fly in fog, clouds or other bad weather.

While more and more are beginning to use GNSS technology, the system has been known to be affected by slight changes in the earth’s atmosphere. There is also no way of knowing how accurate the information is.

‘With conventional GNSS technology positioning, there are a lot of external effects that might negatively affect the precision of positioning,’ said project manager Santi Vilardaga of Barcelona-based Pildo Labs.

In order to encourage the adoption of GNSS technology, the 5 LIVES system overlays information from EGNOS to confirm the precision and accuracy of satellite data.

‘Different parameters that affect the displacement of the signal or the propagation of the satellite signal through the atmosphere are collected through a network of ground-based stations and then transferred to the geostationary satellites that then broadcast the corrections to the EGNOS users,’ he said.
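Schematically, the correction flow Vilardaga describes amounts to subtracting broadcast error estimates from each raw satellite range before computing a position. The sketch below is a heavily simplified illustration of that idea; the field names and values are invented and do not reflect the actual EGNOS message format:

```python
# Schematic sketch of the SBAS/EGNOS idea: a receiver subtracts broadcast
# error estimates (satellite clock error, ionospheric and tropospheric
# delay) from each raw pseudorange. Field names and values are invented
# for illustration and do not match the real EGNOS message format.
def corrected_pseudorange(raw_range_m, corrections):
    """Apply broadcast corrections (all in metres) to one raw range."""
    return (raw_range_m
            - corrections["sat_clock_m"]
            - corrections["iono_delay_m"]
            - corrections["tropo_delay_m"])

example = corrected_pseudorange(
    21_000_123.4,
    {"sat_clock_m": 12.7, "iono_delay_m": 4.1, "tropo_delay_m": 2.3},
)
# The corrected range then feeds the usual position solution; EGNOS also
# broadcasts integrity bounds so the receiver knows how far to trust it.
```

It is the integrity information, the bound on how wrong the corrected ranges can be, that makes the system usable for safety-critical flight rather than just more accurate.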

More flexible

This extra precision and confidence in the data allows emergency helicopters to become more flexible. ‘They can operate at night, and they can operate in poor meteorological conditions without the need for added technology on the ground,’ said Vilardaga.

They are also designing drone-based search and rescue operations. Drones can be used to fly around disaster areas, using on-board cameras to continue the search for survivors even when weather conditions make it unsafe for human pilots to do so, explains Vilardaga.

The team is now ensuring that the drones satisfy existing European regulations and that their new positioning technology is up to standard for a demonstration of their search and rescue procedures next year.

The need for new ways to combat natural disasters from the air is being driven in part by a new type of crisis: so-called mega-fires. These once-rare forest fires, which can spread at a rate of more than three acres per second, are large enough to spread between fire-fighting jurisdictions.

A variety of factors, such as drought, climate change and the practice of preventing smaller fires from developing naturally, have all been blamed for the increasing frequency of mega-fires.

To defend against these fires, the EU-funded Advanced Forest Fire Fighting project is trying to improve both firefighting logistics and technology.

One method involves developing a pellet made of extinguishing materials which can be dropped from a greater height, meaning pilots are less likely to encounter risks from smoke or heat from the fire.

Another area under development is centralising the information from satellite data, drones and ground reports into what is being called a core expert engine to help firefighters to prioritise procedures.

‘The commander can have at his disposal just one aircraft with a pellet,’ said project coordinator Cecilia Coveri of Finmeccanica SpA, Italy, ‘and in case the pellet is the best way to extinguish a fire … this simulation can help the commander to take this decision.’

The project begins its first trials in Athens during the summer in cooperation with Greek naval forces, with additional testing of the risk analysis and simulation tools to follow.


Robotics narratives: Here, there, and everywhere

Source: Padbot

UW-Madison Science Narratives and partners have joined together to create a 5-month video and audio adventure, exploring the work of Dr Bilge Mutlu and colleagues as they craft robots from a human perspective. It’s not just science fiction.

Watch Episode 5

In this next episode, UW-Madison roboticists discuss how quickly robotics research is evolving. How can robots enhance our human experience in ways that make it better?

Each month, Science Narratives posts videos or podcasts to their website. This project is a collaboration of the Division of Continuing Studies, DoIT Academic Technology and Storybridge.tv.


Photos from the Airbus Shopfloor Challenge

airbus_shopfloor_challenge__MG_8747

Robohub covered the Airbus Shopfloor Challenge that took place during #ICRA16 in Stockholm. Below, you can see an extensive photo gallery as part of our coverage. Check it out!

Team Naist

airbus_shopfloor_challenge__MG_9055

Team Naist, from the Nara Institute of Science and Technology, Japan, won first prize. They used a KUKA robot arm fitted with a head with stabilizing rods, and a computer vision system that enabled them to drill holes efficiently and with great precision.

airbus_shopfloor_challenge__MG_x2

airbus_shopfloor_challenge__MG_8770
airbus_shopfloor_challenge__MG_x1

Team CriGroup

airbus_shopfloor_challenge__MG_9366

Team CriGroup is based at the School of Mechanical and Aerospace Engineering at Nanyang Technological University in Singapore. They used ready-made parts and a Denso arm, with a special focus on software. Their method produced an innovative drilling pattern that minimized robot motion. They came in second place.

airbus_shopfloor_challenge__MG_x4

airbus_shopfloor_challenge__MG_9368

Team Sirado

airbus_shopfloor_challenge__MG_9632

Team Sirado brings together 6 researchers from the graduate School of Engineering at the Arts et Métiers Lille campus and 3 experienced industrial representatives from KUKA Systems Aerospace France and KUKA Automatisme Robotique SAS. They also used a KUKA arm, along with a specially designed drill unit. Sirado took third place in the competition.

airbus_shopfloor_challenge__MG_x3

airbus_shopfloor_challenge__MG_9297

Team R3

airbus_shopfloor_challenge__MG_9285

R3 is a robotics collective based out of Ryerson University in Ontario, Canada. Their custom-made XY platform used 7 drill bits in one unit to drill many holes at once. They performed in two rounds and competed in the final round.

airbus_shopfloor_challenge__MG_x5

Team Vayu

airbus_shopfloor_challenge__MG_9119

Team Vayu from India brings together five undergraduate students who share a passion for aerospace. They had the simplest approach, with a compact 3-axis robot that performed well throughout the challenge.

airbus_shopfloor_challenge__MG_x6

Team Akita Prefectural University

airbus_shopfloor_challenge__MG_8718

The Japanese team from Akita Prefectural University implemented a unique solution for the challenge. Their robot used a delta-based mechanism to place the drill bit accurately; the arms themselves used rolled metallic tape under restrictors to extend and contract. They were able to demonstrate their setup, but weren’t able to compete.

airbus_shopfloor_challenge__MG_x7

Team Bug Eaters

airbus_shopfloor_challenge__MG_x8

The Bug Eaters team from the University of Nebraska-Lincoln, USA is made up of four undergraduate Mechanical and Materials Engineering students. Their robot was an innovative version of the delta robot, but issues with their motors didn’t allow them to perform.

EU’s Horizon 2020 has funded $213 million in robotics PPPs

Source: European Commission, Europe's Digital Progress Report (EDPR) / Robot Report

Over the past two years, 5G and robotics PPP projects have received the highest funding awards within Horizon 2020, the EU’s research and innovation program.

PPPs are Public-Private Partnerships which align private and public research objectives under one sponsored umbrella and channel those efforts in specifically funded projects.

Of 850 projects involving 3,312 groups receiving $2.7 billion (€2.4 billion) in European Union funding during Horizon 2020’s first two years of implementation, the Public-Private Partnership (PPP) for 5G accounted for $290 million (€260 million) in funding, while robotics PPPs attracted $213 million (€190 million). The report does not account for private funding coming on top of EU funding.

The most recent 21 robotics projects to receive Horizon 2020 EU funding were detailed in a recent post on Robohub by Sabine Hauert.

SMErobot invention by ABB: Lead-Through-Programming (Image credit: ABB AG)

The Partnership for Robotics in Europe (SPARC) is a public-private partnership of 180 companies and research organizations and represents the EU’s strategic effort to strengthen Europe’s global robotics market, with the goal of increasing Europe’s share of that market to 42% (a boost of €4 billion per year). As part of the project, the EU will invest €700 million and industry will provide an additional €2.1 billion. Application areas emphasized by SPARC include: manufacturing, healthcare, home care, agriculture, security, cleaning waste, water and air, transport and entertainment. With €700M in funding from the Commission for 2014 – 2020, and triple that amount from European industry, SPARC is the largest civilian-funded robotics innovation program in the world.